Test Report: Docker_Linux_containerd_arm64 22179

Commit: 505b1c9a8fd96db2c5d776a2dde7c3c6efd2d048:2025-12-22:42914

Failed tests (34/421)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 501.67
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 368.09
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 2.28
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 2.23
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 2.29
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 733.34
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 2.21
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 1.76
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 3.2
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.45
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 241.65
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 1.46
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.12
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 126.1
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.25
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.25
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.19
358 TestKubernetesUpgrade 796.88
429 TestStartStop/group/no-preload/serial/FirstStart 513.48
440 TestStartStop/group/newest-cni/serial/FirstStart 502.91
441 TestStartStop/group/no-preload/serial/DeployApp 3.01
442 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 103.01
445 TestStartStop/group/no-preload/serial/SecondStart 370.75
447 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 97.77
450 TestStartStop/group/newest-cni/serial/SecondStart 372.05
451 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.4
455 TestStartStop/group/newest-cni/serial/Pause 9.55
486 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 267.57
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1222 00:14:13.002843 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:29.153934 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:56.849136 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.827857 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.833192 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.843564 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.863887 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.904281 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.984676 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:08.144936 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:08.465550 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:09.106568 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:10.386867 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:12.947189 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:18.067916 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:28.308151 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:48.788416 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:19:29.749117 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:20:51.671033 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:21:29.153667 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m20.197079034s)

-- stdout --
	* [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Found network options:
	  - HTTP_PROXY=localhost:38127
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:38127 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284266s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001264351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001264351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 6 (303.933152ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 00:22:14.550442 1440313 status.go:458] kubeconfig endpoint: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-722318 ssh sudo umount -f /mount-9p                                                                                                        │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdspecific-port4022870112/001:/mount-9p --alsologtostderr -v=1 --port 45835                     │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ ssh            │ functional-722318 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh -- ls -la /mount-9p                                                                                                             │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh sudo umount -f /mount-9p                                                                                                        │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount1 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount2 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount3 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ ssh            │ functional-722318 ssh findmnt -T /mount1                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh findmnt -T /mount2                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh findmnt -T /mount3                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ mount          │ -p functional-722318 --kill=true                                                                                                                      │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format short --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format yaml --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh pgrep buildkitd                                                                                                                 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ image          │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr                                                │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format json --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format table --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls                                                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ delete         │ -p functional-722318                                                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ start          │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:13:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:13:54.080388 1434747 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:13:54.080481 1434747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:13:54.080485 1434747 out.go:374] Setting ErrFile to fd 2...
	I1222 00:13:54.080496 1434747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:13:54.080860 1434747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:13:54.081436 1434747 out.go:368] Setting JSON to false
	I1222 00:13:54.082898 1434747 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111387,"bootTime":1766251047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:13:54.082985 1434747 start.go:143] virtualization:  
	I1222 00:13:54.087427 1434747 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:13:54.092042 1434747 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:13:54.092107 1434747 notify.go:221] Checking for updates...
	I1222 00:13:54.099346 1434747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:13:54.102666 1434747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:13:54.105930 1434747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:13:54.109197 1434747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:13:54.112509 1434747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:13:54.115849 1434747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:13:54.146360 1434747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:13:54.146473 1434747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:13:54.206882 1434747 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-22 00:13:54.197215393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:13:54.206980 1434747 docker.go:319] overlay module found
	I1222 00:13:54.210237 1434747 out.go:179] * Using the docker driver based on user configuration
	I1222 00:13:54.213269 1434747 start.go:309] selected driver: docker
	I1222 00:13:54.213279 1434747 start.go:928] validating driver "docker" against <nil>
	I1222 00:13:54.213292 1434747 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:13:54.214025 1434747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:13:54.271430 1434747 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-22 00:13:54.261200801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:13:54.271570 1434747 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:13:54.271783 1434747 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:13:54.274774 1434747 out.go:179] * Using Docker driver with root privileges
	I1222 00:13:54.277559 1434747 cni.go:84] Creating CNI manager for ""
	I1222 00:13:54.277620 1434747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:13:54.277628 1434747 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:13:54.277696 1434747 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:13:54.280848 1434747 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:13:54.283654 1434747 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:13:54.286673 1434747 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:13:54.289679 1434747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:13:54.289705 1434747 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:13:54.289763 1434747 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:13:54.289771 1434747 cache.go:65] Caching tarball of preloaded images
	I1222 00:13:54.289852 1434747 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:13:54.289858 1434747 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:13:54.290208 1434747 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:13:54.290228 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json: {Name:mke91e43ab8a21d275c8837902e371c28943cb74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:13:54.308841 1434747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:13:54.308856 1434747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:13:54.308869 1434747 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:13:54.308900 1434747 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:13:54.309028 1434747 start.go:364] duration metric: took 112.871µs to acquireMachinesLock for "functional-973657"
	I1222 00:13:54.309058 1434747 start.go:93] Provisioning new machine with config: &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 00:13:54.309124 1434747 start.go:125] createHost starting for "" (driver="docker")
	I1222 00:13:54.312509 1434747 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1222 00:13:54.312818 1434747 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:38127 to docker env.
	I1222 00:13:54.312842 1434747 start.go:159] libmachine.API.Create for "functional-973657" (driver="docker")
	I1222 00:13:54.312867 1434747 client.go:173] LocalClient.Create starting
	I1222 00:13:54.312950 1434747 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 00:13:54.312984 1434747 main.go:144] libmachine: Decoding PEM data...
	I1222 00:13:54.312998 1434747 main.go:144] libmachine: Parsing certificate...
	I1222 00:13:54.313051 1434747 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 00:13:54.313071 1434747 main.go:144] libmachine: Decoding PEM data...
	I1222 00:13:54.313081 1434747 main.go:144] libmachine: Parsing certificate...
	I1222 00:13:54.313425 1434747 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 00:13:54.330156 1434747 cli_runner.go:211] docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 00:13:54.330236 1434747 network_create.go:284] running [docker network inspect functional-973657] to gather additional debugging logs...
	I1222 00:13:54.330250 1434747 cli_runner.go:164] Run: docker network inspect functional-973657
	W1222 00:13:54.346156 1434747 cli_runner.go:211] docker network inspect functional-973657 returned with exit code 1
	I1222 00:13:54.346175 1434747 network_create.go:287] error running [docker network inspect functional-973657]: docker network inspect functional-973657: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-973657 not found
	I1222 00:13:54.346187 1434747 network_create.go:289] output of [docker network inspect functional-973657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-973657 not found
	
	** /stderr **
	I1222 00:13:54.346291 1434747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:13:54.362622 1434747 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001924a20}
	I1222 00:13:54.362654 1434747 network_create.go:124] attempt to create docker network functional-973657 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1222 00:13:54.362715 1434747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-973657 functional-973657
	I1222 00:13:54.422599 1434747 network_create.go:108] docker network functional-973657 192.168.49.0/24 created
	I1222 00:13:54.422631 1434747 kic.go:121] calculated static IP "192.168.49.2" for the "functional-973657" container
	I1222 00:13:54.422703 1434747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 00:13:54.438726 1434747 cli_runner.go:164] Run: docker volume create functional-973657 --label name.minikube.sigs.k8s.io=functional-973657 --label created_by.minikube.sigs.k8s.io=true
	I1222 00:13:54.457240 1434747 oci.go:103] Successfully created a docker volume functional-973657
	I1222 00:13:54.457311 1434747 cli_runner.go:164] Run: docker run --rm --name functional-973657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-973657 --entrypoint /usr/bin/test -v functional-973657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 00:13:55.013277 1434747 oci.go:107] Successfully prepared a docker volume functional-973657
	I1222 00:13:55.013347 1434747 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:13:55.013356 1434747 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 00:13:55.013435 1434747 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-973657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 00:13:58.883595 1434747 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-973657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.870121267s)
	I1222 00:13:58.883614 1434747 kic.go:203] duration metric: took 3.87025524s to extract preloaded images to volume ...
	W1222 00:13:58.883773 1434747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 00:13:58.883880 1434747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 00:13:58.950629 1434747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-973657 --name functional-973657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-973657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-973657 --network functional-973657 --ip 192.168.49.2 --volume functional-973657:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 00:13:59.256535 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Running}}
	I1222 00:13:59.278641 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:13:59.309334 1434747 cli_runner.go:164] Run: docker exec functional-973657 stat /var/lib/dpkg/alternatives/iptables
	I1222 00:13:59.364524 1434747 oci.go:144] the created container "functional-973657" has a running status.
	I1222 00:13:59.364543 1434747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa...
	I1222 00:13:59.465321 1434747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 00:13:59.489428 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:13:59.521106 1434747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 00:13:59.521117 1434747 kic_runner.go:114] Args: [docker exec --privileged functional-973657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 00:13:59.567451 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:13:59.592058 1434747 machine.go:94] provisionDockerMachine start ...
	I1222 00:13:59.592149 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:13:59.612600 1434747 main.go:144] libmachine: Using SSH client type: native
	I1222 00:13:59.612961 1434747 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:13:59.612968 1434747 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:13:59.613675 1434747 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 00:14:02.750311 1434747 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:14:02.750327 1434747 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:14:02.750403 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:14:02.768314 1434747 main.go:144] libmachine: Using SSH client type: native
	I1222 00:14:02.768621 1434747 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:14:02.768630 1434747 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:14:02.911259 1434747 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:14:02.911346 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:14:02.930003 1434747 main.go:144] libmachine: Using SSH client type: native
	I1222 00:14:02.930327 1434747 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:14:02.930343 1434747 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:14:03.062333 1434747 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:14:03.062352 1434747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:14:03.062399 1434747 ubuntu.go:190] setting up certificates
	I1222 00:14:03.062410 1434747 provision.go:84] configureAuth start
	I1222 00:14:03.062478 1434747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:14:03.079790 1434747 provision.go:143] copyHostCerts
	I1222 00:14:03.079863 1434747 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:14:03.079870 1434747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:14:03.079944 1434747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:14:03.080042 1434747 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:14:03.080045 1434747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:14:03.080070 1434747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:14:03.080125 1434747 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:14:03.080128 1434747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:14:03.080150 1434747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:14:03.080238 1434747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:14:03.328502 1434747 provision.go:177] copyRemoteCerts
	I1222 00:14:03.328563 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:14:03.328603 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:14:03.347011 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:14:03.441755 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:14:03.458877 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:14:03.475601 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:14:03.492713 1434747 provision.go:87] duration metric: took 430.279769ms to configureAuth
	I1222 00:14:03.492730 1434747 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:14:03.492928 1434747 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:14:03.492936 1434747 machine.go:97] duration metric: took 3.900869097s to provisionDockerMachine
	I1222 00:14:03.492942 1434747 client.go:176] duration metric: took 9.180070402s to LocalClient.Create
	I1222 00:14:03.492963 1434747 start.go:167] duration metric: took 9.180120208s to libmachine.API.Create "functional-973657"
	I1222 00:14:03.492970 1434747 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:14:03.492979 1434747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:14:03.493041 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:14:03.493078 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:14:03.511078 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:14:03.610118 1434747 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:14:03.613490 1434747 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:14:03.613519 1434747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:14:03.613529 1434747 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:14:03.613587 1434747 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:14:03.613674 1434747 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:14:03.613754 1434747 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:14:03.613801 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:14:03.621527 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:14:03.639649 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:14:03.656727 1434747 start.go:296] duration metric: took 163.743932ms for postStartSetup
	I1222 00:14:03.657123 1434747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:14:03.674188 1434747 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:14:03.674454 1434747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:14:03.674492 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:14:03.691654 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:14:03.786935 1434747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:14:03.791473 1434747 start.go:128] duration metric: took 9.482336534s to createHost
	I1222 00:14:03.791513 1434747 start.go:83] releasing machines lock for "functional-973657", held for 9.482466676s
	I1222 00:14:03.791597 1434747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:14:03.812998 1434747 out.go:179] * Found network options:
	I1222 00:14:03.815917 1434747 out.go:179]   - HTTP_PROXY=localhost:38127
	W1222 00:14:03.818791 1434747 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1222 00:14:03.821578 1434747 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1222 00:14:03.824479 1434747 ssh_runner.go:195] Run: cat /version.json
	I1222 00:14:03.824525 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:14:03.824553 1434747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:14:03.824607 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:14:03.844079 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:14:03.855729 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:14:04.033533 1434747 ssh_runner.go:195] Run: systemctl --version
	I1222 00:14:04.040311 1434747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:14:04.044648 1434747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:14:04.044717 1434747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:14:04.072385 1434747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 00:14:04.072399 1434747 start.go:496] detecting cgroup driver to use...
	I1222 00:14:04.072464 1434747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:14:04.072542 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:14:04.088149 1434747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:14:04.101314 1434747 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:14:04.101377 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:14:04.119632 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:14:04.138566 1434747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:14:04.261420 1434747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:14:04.384274 1434747 docker.go:234] disabling docker service ...
	I1222 00:14:04.384399 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:14:04.408049 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:14:04.422659 1434747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:14:04.546222 1434747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:14:04.677163 1434747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:14:04.690211 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:14:04.705148 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:14:04.714539 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:14:04.724063 1434747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:14:04.724120 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:14:04.733018 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:14:04.741947 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:14:04.750900 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:14:04.760026 1434747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:14:04.768055 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:14:04.776956 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:14:04.786324 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:14:04.795551 1434747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:14:04.803390 1434747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:14:04.811187 1434747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:14:04.928877 1434747 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:14:05.074928 1434747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:14:05.075008 1434747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:14:05.079227 1434747 start.go:564] Will wait 60s for crictl version
	I1222 00:14:05.079304 1434747 ssh_runner.go:195] Run: which crictl
	I1222 00:14:05.083261 1434747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:14:05.112631 1434747 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:14:05.112701 1434747 ssh_runner.go:195] Run: containerd --version
	I1222 00:14:05.135087 1434747 ssh_runner.go:195] Run: containerd --version
	I1222 00:14:05.160292 1434747 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:14:05.163313 1434747 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:14:05.181060 1434747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:14:05.185532 1434747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 00:14:05.196784 1434747 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:14:05.196908 1434747 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:14:05.196976 1434747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:14:05.223983 1434747 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:14:05.223996 1434747 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:14:05.224059 1434747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:14:05.248737 1434747 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:14:05.248749 1434747 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:14:05.248755 1434747 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:14:05.248868 1434747 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:14:05.248939 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:14:05.273499 1434747 cni.go:84] Creating CNI manager for ""
	I1222 00:14:05.273509 1434747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:14:05.273530 1434747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:14:05.273551 1434747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:14:05.273664 1434747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
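	The generated kubeadm.yaml above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` lines. A minimal stdlib-only sketch of splitting such a stream into its component documents (the helper name is illustrative, not minikube's code):

```python
def split_yaml_docs(stream: str) -> list[str]:
    """Split a multi-document YAML stream on bare '---' separator lines.
    Returns the non-empty documents in order."""
    docs, cur = [], []
    for line in stream.splitlines():
        if line.strip() == "---":   # document separator
            docs.append("\n".join(cur))
            cur = []
        else:
            cur.append(line)
    if cur:
        docs.append("\n".join(cur))
    return [d for d in docs if d.strip()]
```

Applied to the config above, this yields four documents, one per kubeadm API object.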
	
	I1222 00:14:05.273736 1434747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:14:05.281614 1434747 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:14:05.281686 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:14:05.289484 1434747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:14:05.302693 1434747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:14:05.316167 1434747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 00:14:05.329431 1434747 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:14:05.333276 1434747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
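	The bash one-liner above performs an idempotent /etc/hosts update: strip any existing line ending in a tab plus the hostname, append the fresh `IP<TAB>name` mapping, then copy the temp file back into place. A stdlib-only sketch of the same line-replacement technique (the function name is illustrative):

```python
def upsert_hosts_line(content: str, ip: str, name: str) -> str:
    """Drop any existing entry for `name` (matched as '\\t<name>' at end of
    line, like the grep -v above), then append 'ip\\tname'."""
    kept = [l for l in content.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```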
	I1222 00:14:05.344430 1434747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:14:05.454216 1434747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:14:05.470647 1434747 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:14:05.470657 1434747 certs.go:195] generating shared ca certs ...
	I1222 00:14:05.470671 1434747 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:14:05.470821 1434747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:14:05.470862 1434747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:14:05.470867 1434747 certs.go:257] generating profile certs ...
	I1222 00:14:05.470924 1434747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:14:05.470934 1434747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt with IP's: []
	I1222 00:14:05.592301 1434747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt ...
	I1222 00:14:05.592323 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: {Name:mk2008a9f32332b0a767a07c3d3596e331ba3c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:14:05.592530 1434747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key ...
	I1222 00:14:05.592536 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key: {Name:mkfa9740b6557facdd822dc4551cf4042fd71055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:14:05.592627 1434747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:14:05.592638 1434747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1222 00:14:05.833387 1434747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081 ...
	I1222 00:14:05.833402 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081: {Name:mk3271dbccbe2a4dc365c84e739d17aefc5b2369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:14:05.833603 1434747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081 ...
	I1222 00:14:05.833615 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081: {Name:mkdada2538e53460dbc1c20a3031cc8f693c8bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:14:05.833707 1434747 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt
	I1222 00:14:05.833787 1434747 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key
	I1222 00:14:05.833840 1434747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:14:05.833852 1434747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt with IP's: []
	I1222 00:14:06.387999 1434747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt ...
	I1222 00:14:06.388016 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt: {Name:mk402bafb7d5e77d7080a6a362b82e18cc99f2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:14:06.388215 1434747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key ...
	I1222 00:14:06.388224 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key: {Name:mkc90908b315943a306e167fccdf777e14bf27fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:14:06.388409 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:14:06.388455 1434747 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:14:06.388463 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:14:06.388493 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:14:06.388517 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:14:06.388540 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:14:06.388584 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:14:06.389160 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:14:06.409771 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:14:06.429468 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:14:06.448353 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:14:06.467407 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:14:06.485305 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:14:06.504007 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:14:06.523900 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:14:06.542211 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:14:06.561160 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:14:06.580129 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:14:06.598187 1434747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:14:06.611358 1434747 ssh_runner.go:195] Run: openssl version
	I1222 00:14:06.617865 1434747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:14:06.625738 1434747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:14:06.633633 1434747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:14:06.637729 1434747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:14:06.637801 1434747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:14:06.679835 1434747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:14:06.687530 1434747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 00:14:06.695222 1434747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:14:06.702721 1434747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:14:06.710446 1434747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:14:06.714802 1434747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:14:06.714864 1434747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:14:06.757681 1434747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:14:06.765501 1434747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 00:14:06.773216 1434747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:14:06.780813 1434747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:14:06.789077 1434747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:14:06.792836 1434747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:14:06.792919 1434747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:14:06.837792 1434747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:14:06.845379 1434747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
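	The repeated `openssl x509 -hash -noout` / `ln -fs` sequence above follows OpenSSL's c_rehash trust-store convention: each CA certificate is made reachable through a symlink named `<subject-hash>.0` in /etc/ssl/certs. A minimal sketch of the linking step (the helper name is illustrative, and the hash is assumed to come from the openssl invocation in the log):

```python
import os

def install_cert_hash_link(cert_path: str, subject_hash: str, certs_dir: str) -> str:
    """Create <certs_dir>/<subject_hash>.0 -> cert_path, force-replacing any
    existing entry, mirroring `sudo ln -fs <pem> /etc/ssl/certs/<hash>.0`."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)          # `-f`: replace a stale link
    os.symlink(cert_path, link)  # `-s`: symbolic link
    return link
```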
	I1222 00:14:06.852862 1434747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:14:06.856390 1434747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 00:14:06.856434 1434747 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:14:06.856503 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:14:06.856569 1434747 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:14:06.884928 1434747 cri.go:96] found id: ""
	I1222 00:14:06.885003 1434747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:14:06.892959 1434747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:14:06.900700 1434747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:14:06.900762 1434747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:14:06.908686 1434747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:14:06.908699 1434747 kubeadm.go:158] found existing configuration files:
	
	I1222 00:14:06.908748 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:14:06.916410 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:14:06.916466 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:14:06.924171 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:14:06.932334 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:14:06.932392 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:14:06.940270 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:14:06.948186 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:14:06.948243 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:14:06.955886 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:14:06.963816 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:14:06.963872 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:14:06.971497 1434747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:14:07.038574 1434747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:14:07.039052 1434747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:14:07.114570 1434747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:14:07.114641 1434747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:14:07.114675 1434747 kubeadm.go:319] OS: Linux
	I1222 00:14:07.114743 1434747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:14:07.114807 1434747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:14:07.114854 1434747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:14:07.114901 1434747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:14:07.114966 1434747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:14:07.115023 1434747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:14:07.115075 1434747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:14:07.115123 1434747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:14:07.115168 1434747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:14:07.187029 1434747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:14:07.187134 1434747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:14:07.187225 1434747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:14:07.194520 1434747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:14:07.200872 1434747 out.go:252]   - Generating certificates and keys ...
	I1222 00:14:07.200992 1434747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:14:07.201058 1434747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:14:07.471321 1434747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 00:14:07.831434 1434747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 00:14:07.944775 1434747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 00:14:08.198934 1434747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 00:14:08.637875 1434747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 00:14:08.638218 1434747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:14:08.719954 1434747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 00:14:08.720288 1434747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 00:14:08.778606 1434747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 00:14:08.944050 1434747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 00:14:09.171967 1434747 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 00:14:09.172092 1434747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:14:09.263026 1434747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:14:09.523345 1434747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:14:10.037237 1434747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:14:10.849577 1434747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:14:11.642270 1434747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:14:11.643007 1434747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:14:11.647842 1434747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:14:11.651572 1434747 out.go:252]   - Booting up control plane ...
	I1222 00:14:11.651677 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:14:11.651764 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:14:11.652321 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:14:11.668970 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:14:11.669271 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:14:11.677212 1434747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:14:11.677475 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:14:11.677516 1434747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:14:11.814736 1434747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:14:11.814848 1434747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:18:11.810523 1434747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000284266s
	I1222 00:18:11.810543 1434747 kubeadm.go:319] 
	I1222 00:18:11.810599 1434747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:18:11.810631 1434747 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:18:11.810735 1434747 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:18:11.810738 1434747 kubeadm.go:319] 
	I1222 00:18:11.810842 1434747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:18:11.810872 1434747 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:18:11.810902 1434747 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:18:11.810905 1434747 kubeadm.go:319] 
	I1222 00:18:11.816392 1434747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:18:11.816886 1434747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:18:11.817005 1434747 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:18:11.817260 1434747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:18:11.817264 1434747 kubeadm.go:319] 
	W1222 00:18:11.817461 1434747 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000284266s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 00:18:11.817551 1434747 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:18:11.818136 1434747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:18:12.224633 1434747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:18:12.237999 1434747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:18:12.238053 1434747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:18:12.246163 1434747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:18:12.246172 1434747 kubeadm.go:158] found existing configuration files:
	
	I1222 00:18:12.246225 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:18:12.254057 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:18:12.254133 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:18:12.261335 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:18:12.269033 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:18:12.269088 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:18:12.276596 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:18:12.284147 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:18:12.284200 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:18:12.291464 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:18:12.298844 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:18:12.298907 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:18:12.306236 1434747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:18:12.342156 1434747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:18:12.342204 1434747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:18:12.415534 1434747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:18:12.415599 1434747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:18:12.415633 1434747 kubeadm.go:319] OS: Linux
	I1222 00:18:12.415678 1434747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:18:12.415725 1434747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:18:12.415772 1434747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:18:12.415828 1434747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:18:12.415876 1434747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:18:12.415928 1434747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:18:12.415972 1434747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:18:12.416018 1434747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:18:12.416064 1434747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:18:12.482634 1434747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:18:12.482785 1434747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:18:12.482888 1434747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:18:12.494592 1434747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:18:12.499673 1434747 out.go:252]   - Generating certificates and keys ...
	I1222 00:18:12.499775 1434747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:18:12.499844 1434747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:18:12.499940 1434747 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:18:12.500008 1434747 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:18:12.500082 1434747 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:18:12.500140 1434747 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:18:12.500210 1434747 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:18:12.500279 1434747 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:18:12.500369 1434747 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:18:12.500470 1434747 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:18:12.500519 1434747 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:18:12.500579 1434747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:18:12.662860 1434747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:18:13.066868 1434747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:18:13.350920 1434747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:18:13.534793 1434747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:18:13.594486 1434747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:18:13.595073 1434747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:18:13.597621 1434747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:18:13.600727 1434747 out.go:252]   - Booting up control plane ...
	I1222 00:18:13.600826 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:18:13.600912 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:18:13.601527 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:18:13.621978 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:18:13.622109 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:18:13.631022 1434747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:18:13.631970 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:18:13.632083 1434747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:18:13.775225 1434747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:18:13.775337 1434747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:22:13.776476 1434747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001264351s
	I1222 00:22:13.776503 1434747 kubeadm.go:319] 
	I1222 00:22:13.776569 1434747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:22:13.776626 1434747 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:22:13.776735 1434747 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:22:13.776739 1434747 kubeadm.go:319] 
	I1222 00:22:13.776843 1434747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:22:13.776873 1434747 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:22:13.776902 1434747 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:22:13.776907 1434747 kubeadm.go:319] 
	I1222 00:22:13.781004 1434747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:22:13.781415 1434747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:22:13.781523 1434747 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:22:13.781757 1434747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:22:13.781761 1434747 kubeadm.go:319] 
	I1222 00:22:13.781829 1434747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:22:13.781894 1434747 kubeadm.go:403] duration metric: took 8m6.925464632s to StartCluster
	I1222 00:22:13.781926 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:22:13.781991 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:22:13.807444 1434747 cri.go:96] found id: ""
	I1222 00:22:13.807466 1434747 logs.go:282] 0 containers: []
	W1222 00:22:13.807473 1434747 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:22:13.807479 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:22:13.807537 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:22:13.831579 1434747 cri.go:96] found id: ""
	I1222 00:22:13.831593 1434747 logs.go:282] 0 containers: []
	W1222 00:22:13.831600 1434747 logs.go:284] No container was found matching "etcd"
	I1222 00:22:13.831606 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:22:13.831670 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:22:13.857008 1434747 cri.go:96] found id: ""
	I1222 00:22:13.857024 1434747 logs.go:282] 0 containers: []
	W1222 00:22:13.857031 1434747 logs.go:284] No container was found matching "coredns"
	I1222 00:22:13.857036 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:22:13.857096 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:22:13.883759 1434747 cri.go:96] found id: ""
	I1222 00:22:13.883772 1434747 logs.go:282] 0 containers: []
	W1222 00:22:13.883778 1434747 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:22:13.883784 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:22:13.883841 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:22:13.912604 1434747 cri.go:96] found id: ""
	I1222 00:22:13.912618 1434747 logs.go:282] 0 containers: []
	W1222 00:22:13.912625 1434747 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:22:13.912630 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:22:13.912686 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:22:13.941261 1434747 cri.go:96] found id: ""
	I1222 00:22:13.941275 1434747 logs.go:282] 0 containers: []
	W1222 00:22:13.941287 1434747 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:22:13.941292 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:22:13.941351 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:22:13.967493 1434747 cri.go:96] found id: ""
	I1222 00:22:13.967508 1434747 logs.go:282] 0 containers: []
	W1222 00:22:13.967516 1434747 logs.go:284] No container was found matching "kindnet"
	I1222 00:22:13.967526 1434747 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:22:13.967539 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:22:14.052311 1434747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:22:14.041625    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.042560    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.044927    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.046227    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.046628    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:22:14.041625    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.042560    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.044927    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.046227    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:14.046628    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:22:14.052322 1434747 logs.go:123] Gathering logs for containerd ...
	I1222 00:22:14.052332 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:22:14.101192 1434747 logs.go:123] Gathering logs for container status ...
	I1222 00:22:14.101216 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:22:14.129891 1434747 logs.go:123] Gathering logs for kubelet ...
	I1222 00:22:14.129911 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:22:14.187654 1434747 logs.go:123] Gathering logs for dmesg ...
	I1222 00:22:14.187676 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 00:22:14.203699 1434747 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001264351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:22:14.203745 1434747 out.go:285] * 
	W1222 00:22:14.204002 1434747 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001264351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:22:14.204248 1434747 out.go:285] * 
	W1222 00:22:14.206475 1434747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:22:14.212598 1434747 out.go:203] 
	W1222 00:22:14.215480 1434747 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001264351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:22:14.215530 1434747 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:22:14.215551 1434747 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:22:14.218629 1434747 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013107696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013128776Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013172493Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013193432Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013209375Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013287521Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013300576Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013320358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013338861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013381118Z" level=info msg="Connect containerd service"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013783410Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.014531461Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032162884Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032238388Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032276173Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032330877Z" level=info msg="Start recovering state"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071561572Z" level=info msg="Start event monitor"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071763854Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071833344Z" level=info msg="Start streaming server"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071894144Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071949259Z" level=info msg="runtime interface starting up..."
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.072003758Z" level=info msg="starting plugins..."
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.072069350Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:14:05 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.074200081Z" level=info msg="containerd successfully booted in 0.091082s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:22:15.230343    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:15.231133    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:15.232769    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:15.233135    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:22:15.234652    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:22:15 up 1 day,  7:04,  0 user,  load average: 0.26, 0.58, 1.23
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:22:11 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:22:12 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 00:22:12 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:12 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:12 functional-973657 kubelet[4729]: E1222 00:22:12.534194    4729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:22:12 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:22:12 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 00:22:13 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:13 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:13 functional-973657 kubelet[4735]: E1222 00:22:13.284589    4735 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 00:22:13 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:13 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:14 functional-973657 kubelet[4802]: E1222 00:22:14.061752    4802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 00:22:14 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:14 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:22:14 functional-973657 kubelet[4842]: E1222 00:22:14.798743    4842 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 6 (340.483444ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 00:22:15.684838 1440527 status.go:458] kubeconfig endpoint: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.67s)
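Editor's note: the kubelet journal above shows the root failure mode: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"). The kubeadm warning in the log names the opt-out: set the kubelet configuration option FailCgroupV1 to false. A minimal sketch of that remediation, assuming it is applied via a KubeletConfiguration file (how minikube would actually wire this in is not shown in this report):

```yaml
# Illustrative KubeletConfiguration fragment only, per the warning in the log:
# explicitly opt back into (deprecated) cgroup v1 support for kubelet v1.35+.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

Per the same warning, the cgroup v1 validation must also be explicitly skipped; migrating the host to cgroup v2 is the recommended fix.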

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1222 00:22:15.704137 1396864 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-973657 --alsologtostderr -v=8
E1222 00:23:07.826069 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:23:35.511744 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:26:29.153998 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:27:52.209880 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:28:07.826054 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-973657 --alsologtostderr -v=8: exit status 80 (6m5.315469564s)

-- stdout --
	* [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1222 00:22:15.745982 1440600 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:22:15.746211 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746249 1440600 out.go:374] Setting ErrFile to fd 2...
	I1222 00:22:15.746270 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746555 1440600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:22:15.747001 1440600 out.go:368] Setting JSON to false
	I1222 00:22:15.747938 1440600 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111889,"bootTime":1766251047,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:22:15.748043 1440600 start.go:143] virtualization:  
	I1222 00:22:15.753569 1440600 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:22:15.756598 1440600 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:22:15.756741 1440600 notify.go:221] Checking for updates...
	I1222 00:22:15.762722 1440600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:22:15.765671 1440600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:15.768657 1440600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:22:15.771623 1440600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:22:15.774619 1440600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:22:15.777830 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:15.777978 1440600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:22:15.812917 1440600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:22:15.813051 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.874179 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.864674601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.874289 1440600 docker.go:319] overlay module found
	I1222 00:22:15.877302 1440600 out.go:179] * Using the docker driver based on existing profile
	I1222 00:22:15.880104 1440600 start.go:309] selected driver: docker
	I1222 00:22:15.880124 1440600 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableC
oreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.880226 1440600 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:22:15.880331 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.936346 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.927222796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.936748 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:15.936818 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:15.936877 1440600 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.939915 1440600 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:22:15.942690 1440600 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:22:15.945666 1440600 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:22:15.948535 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:15.948600 1440600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:22:15.948615 1440600 cache.go:65] Caching tarball of preloaded images
	I1222 00:22:15.948645 1440600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:22:15.948702 1440600 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:22:15.948713 1440600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:22:15.948830 1440600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:22:15.969249 1440600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:22:15.969274 1440600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:22:15.969294 1440600 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:22:15.969326 1440600 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:22:15.969396 1440600 start.go:364] duration metric: took 41.633µs to acquireMachinesLock for "functional-973657"
	I1222 00:22:15.969420 1440600 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:22:15.969432 1440600 fix.go:54] fixHost starting: 
	I1222 00:22:15.969697 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:15.991071 1440600 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:22:15.991104 1440600 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:22:15.994289 1440600 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:22:15.994325 1440600 machine.go:94] provisionDockerMachine start ...
	I1222 00:22:15.994407 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.016696 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.017052 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.017069 1440600 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:22:16.150117 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.150145 1440600 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:22:16.150214 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.171110 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.171503 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.171525 1440600 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:22:16.320804 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.320911 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.341102 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.341468 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.341492 1440600 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:22:16.474666 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:22:16.474761 1440600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:22:16.474804 1440600 ubuntu.go:190] setting up certificates
	I1222 00:22:16.474823 1440600 provision.go:84] configureAuth start
	I1222 00:22:16.474894 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:16.493393 1440600 provision.go:143] copyHostCerts
	I1222 00:22:16.493439 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493474 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:22:16.493495 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493578 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:22:16.493680 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493704 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:22:16.493715 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493744 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:22:16.493808 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493831 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:22:16.493837 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493863 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:22:16.493929 1440600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:22:16.688332 1440600 provision.go:177] copyRemoteCerts
	I1222 00:22:16.688423 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:22:16.688474 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.708412 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.807036 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:22:16.807104 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:22:16.826203 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:22:16.826269 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:22:16.844818 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:22:16.844882 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:22:16.862814 1440600 provision.go:87] duration metric: took 387.965654ms to configureAuth
	I1222 00:22:16.862846 1440600 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:22:16.863040 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:16.863055 1440600 machine.go:97] duration metric: took 868.721817ms to provisionDockerMachine
	I1222 00:22:16.863063 1440600 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:22:16.863075 1440600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:22:16.863140 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:22:16.863187 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.881215 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.978224 1440600 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:22:16.981674 1440600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:22:16.981697 1440600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:22:16.981701 1440600 command_runner.go:130] > VERSION_ID="12"
	I1222 00:22:16.981706 1440600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:22:16.981711 1440600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:22:16.981715 1440600 command_runner.go:130] > ID=debian
	I1222 00:22:16.981720 1440600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:22:16.981726 1440600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:22:16.981732 1440600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:22:16.981781 1440600 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:22:16.981805 1440600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:22:16.981817 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:22:16.981874 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:22:16.981966 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:22:16.981976 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /etc/ssl/certs/13968642.pem
	I1222 00:22:16.982050 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:22:16.982058 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> /etc/test/nested/copy/1396864/hosts
	I1222 00:22:16.982135 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:22:16.991499 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:17.014617 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:22:17.034289 1440600 start.go:296] duration metric: took 171.210875ms for postStartSetup
	I1222 00:22:17.034373 1440600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:22:17.034421 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.055784 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.151461 1440600 command_runner.go:130] > 11%
	I1222 00:22:17.151551 1440600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:22:17.156056 1440600 command_runner.go:130] > 174G
	I1222 00:22:17.156550 1440600 fix.go:56] duration metric: took 1.187112425s for fixHost
	I1222 00:22:17.156572 1440600 start.go:83] releasing machines lock for "functional-973657", held for 1.187162091s
	I1222 00:22:17.156642 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:17.174525 1440600 ssh_runner.go:195] Run: cat /version.json
	I1222 00:22:17.174589 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.174652 1440600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:22:17.174714 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.196471 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.199230 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.379176 1440600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:22:17.379235 1440600 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:22:17.379354 1440600 ssh_runner.go:195] Run: systemctl --version
	I1222 00:22:17.385410 1440600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:22:17.385465 1440600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:22:17.385880 1440600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:22:17.390276 1440600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:22:17.390418 1440600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:22:17.390488 1440600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:22:17.398542 1440600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:22:17.398570 1440600 start.go:496] detecting cgroup driver to use...
	I1222 00:22:17.398621 1440600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:22:17.398692 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:22:17.414048 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:22:17.427185 1440600 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:22:17.427253 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:22:17.442685 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:22:17.455696 1440600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:22:17.577927 1440600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:22:17.693641 1440600 docker.go:234] disabling docker service ...
	I1222 00:22:17.693740 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:22:17.714854 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:22:17.729523 1440600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:22:17.852439 1440600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:22:17.963077 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:22:17.977041 1440600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:22:17.991276 1440600 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 00:22:17.992369 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:22:18.003034 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:22:18.019363 1440600 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:22:18.019441 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:22:18.030259 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.041222 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:22:18.051429 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.060629 1440600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:22:18.069455 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:22:18.079294 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:22:18.088607 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:22:18.097955 1440600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:22:18.105014 1440600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:22:18.106002 1440600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:22:18.114147 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.224816 1440600 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:22:18.353040 1440600 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:22:18.353118 1440600 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:22:18.356934 1440600 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1222 00:22:18.357009 1440600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:22:18.357030 1440600 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1222 00:22:18.357053 1440600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:18.357086 1440600 command_runner.go:130] > Access: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357111 1440600 command_runner.go:130] > Modify: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357132 1440600 command_runner.go:130] > Change: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357178 1440600 command_runner.go:130] >  Birth: -
	I1222 00:22:18.357507 1440600 start.go:564] Will wait 60s for crictl version
	I1222 00:22:18.357612 1440600 ssh_runner.go:195] Run: which crictl
	I1222 00:22:18.361021 1440600 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:22:18.361396 1440600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:22:18.384093 1440600 command_runner.go:130] > Version:  0.1.0
	I1222 00:22:18.384169 1440600 command_runner.go:130] > RuntimeName:  containerd
	I1222 00:22:18.384205 1440600 command_runner.go:130] > RuntimeVersion:  v2.2.1
	I1222 00:22:18.384240 1440600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:22:18.386573 1440600 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:22:18.386687 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.407693 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.410154 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.429567 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.437868 1440600 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:22:18.440703 1440600 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:22:18.457963 1440600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:22:18.462339 1440600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:22:18.462457 1440600 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:22:18.462560 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:18.462639 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.493006 1440600 command_runner.go:130] > {
	I1222 00:22:18.493026 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.493030 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493040 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.493045 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493051 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.493055 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493059 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493072 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.493076 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493081 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.493085 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493089 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493092 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493095 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493102 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.493106 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493112 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.493116 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493120 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493128 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.493135 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493139 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.493143 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493147 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493150 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493153 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493162 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.493166 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493171 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.493178 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493186 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493194 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.493197 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493201 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.493206 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.493210 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493213 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493216 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493223 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.493227 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493231 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.493235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493238 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493246 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.493249 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493253 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.493258 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493261 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493264 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493268 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493271 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493275 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493278 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493285 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.493289 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493294 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.493297 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493300 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493308 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.493311 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493316 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.493319 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493335 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493338 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493342 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493346 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493349 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493352 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493359 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.493362 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493368 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.493371 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493374 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493383 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.493386 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493389 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.493393 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493396 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493399 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493403 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493407 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493410 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493413 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493420 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.493423 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493429 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.493432 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493435 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493443 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.493446 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493450 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.493454 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493457 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493460 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493464 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493475 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.493479 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493484 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.493487 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493491 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493498 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.493501 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493505 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.493509 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493512 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493516 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493519 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493523 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493526 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493529 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493536 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.493539 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493543 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.493547 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493550 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493557 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.493560 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493564 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.493568 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493571 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.493575 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493579 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493582 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.493585 1440600 command_runner.go:130] >     }
	I1222 00:22:18.493588 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.493591 1440600 command_runner.go:130] > }
	I1222 00:22:18.493746 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.493754 1440600 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:22:18.493814 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.517780 1440600 command_runner.go:130] > {
	I1222 00:22:18.517799 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.517803 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517813 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.517818 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517824 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.517827 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517831 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517839 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.517843 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517856 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.517861 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517865 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517867 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517870 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517878 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.517882 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517887 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.517890 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517894 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517902 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.517906 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517910 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.517913 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517917 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517920 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517923 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517930 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.517934 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517939 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.517942 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517947 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517955 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.517958 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517962 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.517966 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.517970 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517974 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517977 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517983 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.517987 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517992 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.517995 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518002 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518010 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.518013 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518017 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.518022 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518026 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518029 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518033 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518037 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518041 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518043 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518050 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.518054 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518059 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.518062 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518066 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518073 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.518098 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518103 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.518106 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518115 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518118 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518122 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518125 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518128 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518131 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518142 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.518146 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518151 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.518155 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518158 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518166 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.518170 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518178 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.518182 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518185 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518188 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518192 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518195 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518198 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518202 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518209 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.518212 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518217 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.518220 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518224 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518231 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.518235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518239 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.518242 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518246 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518249 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518253 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518260 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.518264 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518269 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.518273 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518277 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518285 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.518288 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518292 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.518295 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518299 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518302 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518306 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518310 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518318 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518322 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518328 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.518332 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518337 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.518340 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518344 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518352 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.518355 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518358 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.518362 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518366 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.518371 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518375 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518379 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.518388 1440600 command_runner.go:130] >     }
	I1222 00:22:18.518391 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.518397 1440600 command_runner.go:130] > }
	I1222 00:22:18.524524 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.524599 1440600 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:22:18.524620 1440600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:22:18.524759 1440600 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:22:18.524857 1440600 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:22:18.549454 1440600 command_runner.go:130] > {
	I1222 00:22:18.549479 1440600 command_runner.go:130] >   "cniconfig": {
	I1222 00:22:18.549486 1440600 command_runner.go:130] >     "Networks": [
	I1222 00:22:18.549489 1440600 command_runner.go:130] >       {
	I1222 00:22:18.549495 1440600 command_runner.go:130] >         "Config": {
	I1222 00:22:18.549500 1440600 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1222 00:22:18.549519 1440600 command_runner.go:130] >           "Name": "cni-loopback",
	I1222 00:22:18.549527 1440600 command_runner.go:130] >           "Plugins": [
	I1222 00:22:18.549530 1440600 command_runner.go:130] >             {
	I1222 00:22:18.549541 1440600 command_runner.go:130] >               "Network": {
	I1222 00:22:18.549546 1440600 command_runner.go:130] >                 "ipam": {},
	I1222 00:22:18.549551 1440600 command_runner.go:130] >                 "type": "loopback"
	I1222 00:22:18.549560 1440600 command_runner.go:130] >               },
	I1222 00:22:18.549566 1440600 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1222 00:22:18.549570 1440600 command_runner.go:130] >             }
	I1222 00:22:18.549579 1440600 command_runner.go:130] >           ],
	I1222 00:22:18.549590 1440600 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1222 00:22:18.549604 1440600 command_runner.go:130] >         },
	I1222 00:22:18.549612 1440600 command_runner.go:130] >         "IFName": "lo"
	I1222 00:22:18.549615 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549619 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549626 1440600 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1222 00:22:18.549636 1440600 command_runner.go:130] >     "PluginDirs": [
	I1222 00:22:18.549640 1440600 command_runner.go:130] >       "/opt/cni/bin"
	I1222 00:22:18.549643 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549648 1440600 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1222 00:22:18.549656 1440600 command_runner.go:130] >     "Prefix": "eth"
	I1222 00:22:18.549667 1440600 command_runner.go:130] >   },
	I1222 00:22:18.549674 1440600 command_runner.go:130] >   "config": {
	I1222 00:22:18.549678 1440600 command_runner.go:130] >     "cdiSpecDirs": [
	I1222 00:22:18.549682 1440600 command_runner.go:130] >       "/etc/cdi",
	I1222 00:22:18.549687 1440600 command_runner.go:130] >       "/var/run/cdi"
	I1222 00:22:18.549691 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549695 1440600 command_runner.go:130] >     "cni": {
	I1222 00:22:18.549698 1440600 command_runner.go:130] >       "binDir": "",
	I1222 00:22:18.549702 1440600 command_runner.go:130] >       "binDirs": [
	I1222 00:22:18.549706 1440600 command_runner.go:130] >         "/opt/cni/bin"
	I1222 00:22:18.549709 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.549713 1440600 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1222 00:22:18.549717 1440600 command_runner.go:130] >       "confTemplate": "",
	I1222 00:22:18.549720 1440600 command_runner.go:130] >       "ipPref": "",
	I1222 00:22:18.549728 1440600 command_runner.go:130] >       "maxConfNum": 1,
	I1222 00:22:18.549732 1440600 command_runner.go:130] >       "setupSerially": false,
	I1222 00:22:18.549739 1440600 command_runner.go:130] >       "useInternalLoopback": false
	I1222 00:22:18.549748 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549754 1440600 command_runner.go:130] >     "containerd": {
	I1222 00:22:18.549759 1440600 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1222 00:22:18.549768 1440600 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1222 00:22:18.549773 1440600 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1222 00:22:18.549777 1440600 command_runner.go:130] >       "runtimes": {
	I1222 00:22:18.549781 1440600 command_runner.go:130] >         "runc": {
	I1222 00:22:18.549786 1440600 command_runner.go:130] >           "ContainerAnnotations": null,
	I1222 00:22:18.549795 1440600 command_runner.go:130] >           "PodAnnotations": null,
	I1222 00:22:18.549799 1440600 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1222 00:22:18.549803 1440600 command_runner.go:130] >           "cgroupWritable": false,
	I1222 00:22:18.549808 1440600 command_runner.go:130] >           "cniConfDir": "",
	I1222 00:22:18.549816 1440600 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1222 00:22:18.549825 1440600 command_runner.go:130] >           "io_type": "",
	I1222 00:22:18.549829 1440600 command_runner.go:130] >           "options": {
	I1222 00:22:18.549834 1440600 command_runner.go:130] >             "BinaryName": "",
	I1222 00:22:18.549841 1440600 command_runner.go:130] >             "CriuImagePath": "",
	I1222 00:22:18.549847 1440600 command_runner.go:130] >             "CriuWorkPath": "",
	I1222 00:22:18.549851 1440600 command_runner.go:130] >             "IoGid": 0,
	I1222 00:22:18.549860 1440600 command_runner.go:130] >             "IoUid": 0,
	I1222 00:22:18.549864 1440600 command_runner.go:130] >             "NoNewKeyring": false,
	I1222 00:22:18.549869 1440600 command_runner.go:130] >             "Root": "",
	I1222 00:22:18.549874 1440600 command_runner.go:130] >             "ShimCgroup": "",
	I1222 00:22:18.549883 1440600 command_runner.go:130] >             "SystemdCgroup": false
	I1222 00:22:18.549890 1440600 command_runner.go:130] >           },
	I1222 00:22:18.549896 1440600 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1222 00:22:18.549907 1440600 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1222 00:22:18.549911 1440600 command_runner.go:130] >           "runtimePath": "",
	I1222 00:22:18.549916 1440600 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1222 00:22:18.549920 1440600 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1222 00:22:18.549924 1440600 command_runner.go:130] >           "snapshotter": ""
	I1222 00:22:18.549928 1440600 command_runner.go:130] >         }
	I1222 00:22:18.549931 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549934 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549944 1440600 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1222 00:22:18.549953 1440600 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1222 00:22:18.549961 1440600 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1222 00:22:18.549965 1440600 command_runner.go:130] >     "disableApparmor": false,
	I1222 00:22:18.549970 1440600 command_runner.go:130] >     "disableHugetlbController": true,
	I1222 00:22:18.549978 1440600 command_runner.go:130] >     "disableProcMount": false,
	I1222 00:22:18.549983 1440600 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1222 00:22:18.549987 1440600 command_runner.go:130] >     "enableCDI": true,
	I1222 00:22:18.549991 1440600 command_runner.go:130] >     "enableSelinux": false,
	I1222 00:22:18.549996 1440600 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1222 00:22:18.550004 1440600 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1222 00:22:18.550010 1440600 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1222 00:22:18.550015 1440600 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1222 00:22:18.550019 1440600 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1222 00:22:18.550024 1440600 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1222 00:22:18.550035 1440600 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1222 00:22:18.550046 1440600 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550051 1440600 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1222 00:22:18.550059 1440600 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550068 1440600 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1222 00:22:18.550072 1440600 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1222 00:22:18.550165 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550176 1440600 command_runner.go:130] >   "features": {
	I1222 00:22:18.550180 1440600 command_runner.go:130] >     "supplemental_groups_policy": true
	I1222 00:22:18.550184 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550188 1440600 command_runner.go:130] >   "golang": "go1.24.11",
	I1222 00:22:18.550201 1440600 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550222 1440600 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550231 1440600 command_runner.go:130] >   "runtimeHandlers": [
	I1222 00:22:18.550234 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550238 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550243 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550253 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550257 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550260 1440600 command_runner.go:130] >     },
	I1222 00:22:18.550264 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550268 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550272 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550277 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550282 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550286 1440600 command_runner.go:130] >       "name": "runc"
	I1222 00:22:18.550290 1440600 command_runner.go:130] >     }
	I1222 00:22:18.550293 1440600 command_runner.go:130] >   ],
	I1222 00:22:18.550296 1440600 command_runner.go:130] >   "status": {
	I1222 00:22:18.550302 1440600 command_runner.go:130] >     "conditions": [
	I1222 00:22:18.550305 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550315 1440600 command_runner.go:130] >         "message": "",
	I1222 00:22:18.550319 1440600 command_runner.go:130] >         "reason": "",
	I1222 00:22:18.550327 1440600 command_runner.go:130] >         "status": true,
	I1222 00:22:18.550337 1440600 command_runner.go:130] >         "type": "RuntimeReady"
	I1222 00:22:18.550341 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550344 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550352 1440600 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1222 00:22:18.550360 1440600 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1222 00:22:18.550365 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550369 1440600 command_runner.go:130] >         "type": "NetworkReady"
	I1222 00:22:18.550373 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550375 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550400 1440600 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1222 00:22:18.550411 1440600 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1222 00:22:18.550417 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550423 1440600 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1222 00:22:18.550427 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550430 1440600 command_runner.go:130] >     ]
	I1222 00:22:18.550433 1440600 command_runner.go:130] >   }
	I1222 00:22:18.550437 1440600 command_runner.go:130] > }
	I1222 00:22:18.553215 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:18.553243 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:18.553264 1440600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:22:18.553287 1440600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:22:18.553412 1440600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:22:18.553487 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:22:18.562348 1440600 command_runner.go:130] > kubeadm
	I1222 00:22:18.562392 1440600 command_runner.go:130] > kubectl
	I1222 00:22:18.562397 1440600 command_runner.go:130] > kubelet
	I1222 00:22:18.563648 1440600 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:22:18.563729 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:22:18.571505 1440600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:22:18.584676 1440600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:22:18.597236 1440600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 00:22:18.610841 1440600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:22:18.614244 1440600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:22:18.614541 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.726610 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:19.239399 1440600 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:22:19.239420 1440600 certs.go:195] generating shared ca certs ...
	I1222 00:22:19.239437 1440600 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.239601 1440600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:22:19.239659 1440600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:22:19.239667 1440600 certs.go:257] generating profile certs ...
	I1222 00:22:19.239794 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:22:19.239853 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:22:19.239904 1440600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:22:19.239913 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:22:19.239940 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:22:19.239954 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:22:19.239964 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:22:19.239974 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:22:19.239986 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:22:19.239996 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:22:19.240015 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:22:19.240069 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:22:19.240100 1440600 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:22:19.240108 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:22:19.240138 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:22:19.240165 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:22:19.240227 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:22:19.240279 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:19.240316 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.240338 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.240354 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem -> /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.240935 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:22:19.264800 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:22:19.285797 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:22:19.306670 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:22:19.326432 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:22:19.345177 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:22:19.365354 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:22:19.385285 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:22:19.406674 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:22:19.425094 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:22:19.443464 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:22:19.461417 1440600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:22:19.474356 1440600 ssh_runner.go:195] Run: openssl version
	I1222 00:22:19.480426 1440600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:22:19.480764 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.488508 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:22:19.496491 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500580 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500632 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500692 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.542795 1440600 command_runner.go:130] > 3ec20f2e
	I1222 00:22:19.543311 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:22:19.550778 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.558196 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:22:19.566111 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570217 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570294 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570384 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.611673 1440600 command_runner.go:130] > b5213941
	I1222 00:22:19.612225 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:22:19.620704 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.628264 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:22:19.635997 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.639846 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640210 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640329 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.681144 1440600 command_runner.go:130] > 51391683
	I1222 00:22:19.681670 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:22:19.689290 1440600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693035 1440600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693063 1440600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:22:19.693070 1440600 command_runner.go:130] > Device: 259,1	Inode: 3898609     Links: 1
	I1222 00:22:19.693078 1440600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:19.693115 1440600 command_runner.go:130] > Access: 2025-12-22 00:18:12.483760857 +0000
	I1222 00:22:19.693127 1440600 command_runner.go:130] > Modify: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693132 1440600 command_runner.go:130] > Change: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693137 1440600 command_runner.go:130] >  Birth: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693272 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:22:19.733914 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.734424 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:22:19.775247 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.775751 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:22:19.816615 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.817124 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:22:19.858237 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.858742 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:22:19.899966 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.900073 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:22:19.941050 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.941558 1440600 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:19.941671 1440600 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:22:19.941755 1440600 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:22:19.969312 1440600 cri.go:96] found id: ""
	I1222 00:22:19.969401 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:22:19.976791 1440600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:22:19.976817 1440600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:22:19.976825 1440600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:22:19.977852 1440600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:22:19.977869 1440600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:22:19.977970 1440600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:22:19.987953 1440600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:22:19.988422 1440600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.988584 1440600 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "functional-973657" cluster setting kubeconfig missing "functional-973657" context setting]
	I1222 00:22:19.988906 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.989373 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.989570 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:19.990226 1440600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:22:19.990386 1440600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:22:19.990501 1440600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:22:19.990531 1440600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:22:19.990563 1440600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:22:19.990584 1440600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:22:19.990915 1440600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:22:19.999837 1440600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:22:19.999916 1440600 kubeadm.go:602] duration metric: took 22.040118ms to restartPrimaryControlPlane
	I1222 00:22:19.999943 1440600 kubeadm.go:403] duration metric: took 58.401328ms to StartCluster
	I1222 00:22:19.999973 1440600 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.000060 1440600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.000818 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.001160 1440600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 00:22:20.001573 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:20.001632 1440600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:22:20.001706 1440600 addons.go:70] Setting storage-provisioner=true in profile "functional-973657"
	I1222 00:22:20.001719 1440600 addons.go:239] Setting addon storage-provisioner=true in "functional-973657"
	I1222 00:22:20.001742 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.002272 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.005335 1440600 addons.go:70] Setting default-storageclass=true in profile "functional-973657"
	I1222 00:22:20.005371 1440600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-973657"
	I1222 00:22:20.005777 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.009418 1440600 out.go:179] * Verifying Kubernetes components...
	I1222 00:22:20.018228 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:20.049014 1440600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:22:20.054188 1440600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.054214 1440600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:22:20.054285 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.057022 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.057199 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:20.057484 1440600 addons.go:239] Setting addon default-storageclass=true in "functional-973657"
	I1222 00:22:20.057515 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.057932 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.116105 1440600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.116126 1440600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:22:20.116211 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.118476 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.150964 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.230950 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:20.246813 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.269038 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.989713 1440600 node_ready.go:35] waiting up to 6m0s for node "functional-973657" to be "Ready" ...
	I1222 00:22:20.989868 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.989910 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.989956 1440600 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990019 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.990037 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990158 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:20.990237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:20.990539 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.220129 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.281805 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.285548 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.328766 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.389895 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.389951 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.490214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.490305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.490671 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.747162 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.762982 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.851794 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.851892 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.874934 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.874990 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.990352 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.990483 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.990846 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.169304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:22.227981 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.231304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 00:22:22.232314 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.293066 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.293113 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.490400 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.490500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.490834 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.906334 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:22.975672 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.975713 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.990847 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.991200 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:22.991243 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:23.106669 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:23.165342 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.165389 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.490828 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.490919 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.491242 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:23.690784 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:23.756600 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.760540 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.489993 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.490454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.698734 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:24.769684 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:24.773516 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:24.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:24.991642 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:25.485320 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:25.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.490301 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.490614 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:25.576354 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:25.576402 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:25.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.990409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.023839 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:26.088004 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:26.088050 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:26.490597 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.491019 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.990635 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.990716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.991074 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:27.490758 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.490828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.491160 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:27.491213 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:27.990564 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.990642 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.991013 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.490658 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.490747 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.491022 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.831344 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:28.887027 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:28.890561 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:28.990850 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.990934 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.991236 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.310761 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:29.372391 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:29.372466 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:29.490719 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.490793 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.491132 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.989857 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.989931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.990237 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:29.990280 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:30.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:30.990341 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.990414 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.990750 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.490503 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.490609 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.490891 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.990771 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.991094 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:31.991143 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:32.490784 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.490857 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.491147 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:32.990889 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.990957 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.991275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.490908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.490983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.491308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:34.489922 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.490003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.490315 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:34.490363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:34.729902 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:34.785155 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:34.788865 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.788903 1440600 retry.go:84] will retry after 5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.990103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.490036 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.490475 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.603941 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:35.664634 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:35.664674 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:35.990278 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.990353 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.990620 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:36.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.490457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:36.490508 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:36.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.990309 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.990632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.490265 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.490582 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.990369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.990755 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:38.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.491023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:38.491077 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:38.990843 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.990915 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.991302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.490378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.827913 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:39.886956 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:39.887007 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.887042 1440600 retry.go:84] will retry after 5.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.990290 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.990608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.490611 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:40.990478 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:41.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.490430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:41.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.490198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:43.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:43.490468 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:43.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.490462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.689826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:44.747699 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:44.751320 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.751358 1440600 retry.go:84] will retry after 11.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.990747 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.991101 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:45.490873 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.491354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:45.491411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:45.742662 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:45.802582 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:45.802622 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:45.990273 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.990345 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.990196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.990269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.990588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.490213 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.490626 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.990032 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.990136 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.990574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:47.990636 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:48.490291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.490369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.490704 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:48.990450 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.990547 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.990893 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.490743 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.490839 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.491164 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.989920 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.990005 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.990408 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:50.490031 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.490126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:50.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:50.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.990566 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.990936 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.490631 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.490764 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.491053 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.989876 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.989962 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.990268 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.990110 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.990196 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.990515 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:52.990572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:53.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.490563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:53.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.990024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.990327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:55.234826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:55.295545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:55.295592 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.295617 1440600 retry.go:84] will retry after 23.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.490907 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.490991 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.491326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:55.491397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:55.990270 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.990351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.490418 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.490484 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.490747 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.590202 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:56.649053 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:56.649099 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:56.990592 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.990671 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.490928 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.989908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.990332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:57.990391 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:58.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:58.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.990106 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.990483 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.490276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.990371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:59.990417 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:00.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.490099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:00.990353 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.990690 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.490534 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.490615 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.990810 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.991247 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:01.991307 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.490020 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.490365 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:02.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.490160 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.490510 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.990186 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.990567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:04.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.490056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.490388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:04.490445 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:04.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.490618 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.490690 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.990715 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.990804 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.991174 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:06.490848 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.490927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.491264 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:06.491323 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:06.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.990038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.990349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.490094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.990138 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.990557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.490297 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.990354 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.990451 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.990812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:08.990863 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:09.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.490727 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.491063 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:09.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.990741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.991016 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.490839 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.490917 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.491255 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.990187 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.990542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:11.490194 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.490275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.490617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:11.490670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:11.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.990065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.990445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.490589 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.990181 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.490414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.990036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:13.990452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:14.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:14.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.990388 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.990804 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:15.990869 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:16.088253 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:16.150952 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:16.151001 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.151025 1440600 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.490464 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.490538 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.490881 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:16.990721 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.990797 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.991127 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.489899 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.489969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.490299 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.990075 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.990174 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:18.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:18.490500 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:18.654775 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:23:18.717545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:18.717590 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:18.989890 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.989961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.990331 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.490056 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.490166 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.490043 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.490401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.990442 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.990522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.990905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:20.990965 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:21.490561 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.490647 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:21.990814 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.990880 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.991151 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.489859 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.489933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.989907 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:23.489920 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.489988 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.490275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:23.490318 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:23.990022 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.990126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.990454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.990527 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:25.489940 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.490018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.490368 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:25.490426 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:25.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.990391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.490033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.490358 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.990471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:27.490073 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.490476 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:27.490523 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:27.990208 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.990284 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.161122 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:28.220514 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:28.224335 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.224393 1440600 retry.go:84] will retry after 41.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.490932 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.491336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.989984 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.990321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.490024 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.490113 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.990094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.990474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:29.990557 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:30.490047 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:30.990259 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.990329 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.990655 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.490411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.990014 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.990469 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:32.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.490375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:32.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:32.990098 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.990500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.990154 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.990566 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:34.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.490090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.490440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:34.490501 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.990397 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.490066 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.490157 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.990431 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.990505 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:36.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.490528 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.490835 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:36.490884 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:36.990603 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.990678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.990954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.490735 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.490807 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.491181 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.990929 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.991230 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.489948 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.490349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.990008 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.990436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:38.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:39.489987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.490063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.490393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:39.989944 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.990040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.990363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.990475 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.990549 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.990889 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:40.990950 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:41.490672 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.490741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.491008 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:41.990780 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.990856 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.991209 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.490871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.490954 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.491340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.990404 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:43.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.490032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.490391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:43.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:43.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:45.490042 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.490139 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.490488 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:45.490544 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:45.990486 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.990563 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.990841 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.490719 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.491036 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.990855 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.990935 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.991321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.490334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.990452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:47.990507 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:48.490178 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.490596 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:48.990176 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.990258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.990544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.490811 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.490901 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.989874 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.989946 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.990300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:50.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.490343 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:50.490397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:50.990356 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.990437 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.990752 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.490553 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.490629 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.490975 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.990784 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.990866 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.489871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.489953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.989984 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:52.990479 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:53.490129 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.490202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.490518 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:53.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.990262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.990609 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.490055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:55.490063 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.490153 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.490516 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:55.490572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:55.990452 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.990878 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.490649 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.490982 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.990754 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.990838 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.991192 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.489902 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.489983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:57.990458 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:58.490135 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.490219 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:58.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.990363 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.490474 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.490546 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.490809 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.990637 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.990713 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.991064 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:59.991122 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:00.490316 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.490400 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.490862 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:00.990668 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.990739 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.991087 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.490883 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.490958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.990026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.990325 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:02.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:02.490534 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:02.990031 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.990131 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.990497 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.490600 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.489994 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.490456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.989918 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.989996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:04.990396 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:05.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.490511 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:05.990488 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.990562 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.990914 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.490753 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.490832 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.491115 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.560484 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:24:06.618784 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622419 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622526 1440600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:06.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.990383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:06.990433 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:07.490157 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:07.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.990270 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.990599 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.490115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:08.990466 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:09.490075 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.490168 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.490514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:09.609944 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:24:09.674734 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674775 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674856 1440600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:09.678207 1440600 out.go:179] * Enabled addons: 
	I1222 00:24:09.681672 1440600 addons.go:530] duration metric: took 1m49.680036347s for enable addons: enabled=[]
	I1222 00:24:09.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.490125 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.990344 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.990411 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.990682 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:10.990727 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:11.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.490672 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.491056 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:11.990903 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.990982 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.991278 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.990005 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.990116 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:13.489991 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.490102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.490441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:13.490498 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:13.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.990306 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.490370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.989952 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.990029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:15.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.495140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1222 00:24:15.495205 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:15.990139 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.990548 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.490265 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.490341 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.490685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.990466 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.990810 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.490605 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.491024 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.990888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.991232 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:17.991290 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:18.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.490152 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.990267 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.990595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:20.490013 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:20.490529 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:20.990365 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.990452 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.990874 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.490647 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.490974 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.990817 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.990890 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.991258 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.489990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.990001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.990291 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:22.990334 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:23.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:23.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.990618 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.490263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.490567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.990427 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:24.990485 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:25.490302 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.490387 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.490733 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:25.990623 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.990702 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.990981 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.490787 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.989979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.990394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:27.490103 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.490443 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:27.490482 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:27.989960 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.990034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.490622 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.990631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:29.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:29.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:29.990018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.990122 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.490142 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.990537 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.990938 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:31.490581 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.490653 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.490983 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:31.491033 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:31.990778 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.990859 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.991138 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.489907 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.489978 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.490318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.490158 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.990432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:33.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:34.490195 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.490327 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.490668 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:34.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.990257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.990639 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.490386 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.490458 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.490812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.990390 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.990796 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:35.990852 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:36.490573 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.490651 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.490929 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:36.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.991171 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.489889 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.489967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.990114 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.990447 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:38.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.490429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:38.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:38.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.490269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.490588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.990227 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.990305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.990674 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:40.490443 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.490519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.490858 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:40.490915 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:40.990684 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.990753 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.490794 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.491216 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.490310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.990458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:42.990518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:43.489986 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:43.990126 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.990202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.990538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.490436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.990151 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.990227 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.990551 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:44.990601 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:45.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.490261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.490538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:45.990641 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.990724 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.991072 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.491707 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.491786 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.492142 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.989879 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.989948 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.990262 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:47.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.490372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:47.490436 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:47.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.990414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.490104 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.490506 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:49.490140 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.490223 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.490576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:49.490637 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:49.990206 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.990555 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.990242 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.990326 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:51.990489 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:52.490183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:52.990255 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.990324 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.990635 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.489964 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.490363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.990423 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:54.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.489996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.490285 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:54.490333 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:54.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.490011 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.490421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.990389 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:56.490486 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.490557 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.490869 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:56.490916 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:56.990725 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.990802 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.991133 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.490974 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.491290 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.490136 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.990263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.990550 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:58.990605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:59.490266 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.490351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.490696 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:59.990527 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.990604 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.990950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.490941 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.491023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.491350 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.990417 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.990491 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.990807 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:00.990855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:01.490637 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.490718 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.491102 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:01.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.991226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.490011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.990107 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.990182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.990528 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:03.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:03.490666 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:03.990323 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.990398 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.990774 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.490099 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.490181 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.990232 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.990304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.990399 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.990480 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.990832 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:05.990887 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:06.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.491014 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:06.990839 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.990944 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.991373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.490104 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.990134 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.990205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.990514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:08.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.490044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:08.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:08.990161 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.990242 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.990563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.490287 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.490623 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:10.490169 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:10.490691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:10.990484 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.990556 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.990880 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.490691 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.490770 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.491148 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.490147 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:12.990662 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:13.490189 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.490632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:13.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.990530 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.490218 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.490648 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.990225 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.990310 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.990701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:14.990757 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:15.490513 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.490584 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.490919 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:15.990746 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.991183 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.489918 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.490332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.990305 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:17.489916 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.489993 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:17.490411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:17.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.990067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:19.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:19.490499 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:19.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.990384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.490152 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.990524 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.990901 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:21.490562 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.490638 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.490935 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:21.490983 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:21.990766 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.991203 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.490028 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.989946 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.490115 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.490193 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.990250 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.990328 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:23.990712 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:24.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.490259 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:24.990067 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.990519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.490112 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.490189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.490544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.990337 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.990408 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.990679 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:26.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.490493 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:26.490549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:26.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.990450 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.490004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.490303 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.489996 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.490409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.989926 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.990293 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:28.990335 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:29.489955 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.490419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.490182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.990514 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.990968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:30.991038 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:31.490801 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.490876 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.491238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:31.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.990319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.490200 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.490583 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.990457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:33.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:33.490407 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:33.989968 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.990049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.990398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.490108 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.490195 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.990023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.990366 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.490049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.490369 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.990125 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.990203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.990545 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:35.990599 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:36.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.490569 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:36.990016 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.490182 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.490252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.990266 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.990607 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:37.990658 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:38.490325 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.490406 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.490727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:38.990379 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.990457 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.990798 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.490205 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.990463 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:40.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.490394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:40.490452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:40.990349 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.990727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.490471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.990033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.990356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.489960 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.490300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.990099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:42.990502 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:43.490037 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:43.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.990473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:44.990528 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:45.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.490577 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:45.990696 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.990768 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.991105 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.490888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.490961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.491348 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.989888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.989967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.990307 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:47.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.490035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:47.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.490130 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.490210 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.990197 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.990271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.990616 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:49.490301 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.490378 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.490708 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:49.490767 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:49.990515 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.990590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.990888 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.490679 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.490750 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.491091 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.990830 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.990903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.991524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.490235 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.490311 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.490637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.990347 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:51.990776 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:52.490531 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.490608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.490905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:52.990690 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.990761 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.490818 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.490896 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.491226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.990031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.990354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:54.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.490015 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:54.490441 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:54.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.490031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.490422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.990385 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.990459 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.990735 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:56.490568 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.490961 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:56.491006 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:56.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.991170 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.489861 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.489931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.490260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.989963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.990042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.990407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.490137 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.490214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.990306 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.990624 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:58.990667 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:59.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:59.990163 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.990236 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.490289 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.490597 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.990658 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.990731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.991092 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:00.991152 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:01.490884 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.490964 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.491316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:01.990041 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.990455 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.490830 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.490906 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.990098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:03.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:03.490390 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:03.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.990467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.490255 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.490608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.990184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:05.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.490051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:05.490456 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:05.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.990523 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.990849 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.490609 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.490954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.990889 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.991238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.490037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.990334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:07.990374 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:08.490043 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.490140 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:08.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.990041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.490498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.990254 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.990339 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.990808 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:09.990886 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:10.490650 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.490731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.491042 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:10.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.991173 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.489924 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.490252 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:12.490161 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.490230 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:12.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:12.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.990009 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.990308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.490144 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.990441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:14.990497 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:15.490170 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.490244 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.490586 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:15.990615 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.990697 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.991007 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.490796 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.491907 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.990669 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.990740 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:16.991078 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:17.490833 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.490913 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.491260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:17.989966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.489932 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.990433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:19.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.490224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:19.490656 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:19.990226 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.990300 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.490362 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.990101 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.990509 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.490185 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.490264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.490595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.989954 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:21.990455 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:22.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.490385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:22.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.990003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.490027 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.490129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.990190 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.990277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.990691 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:23.990747 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:24.490514 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.490583 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.490927 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:24.990720 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.990794 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.490979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.491379 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.990094 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.990175 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.990521 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:26.490373 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.490449 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.490797 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:26.490855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:26.990580 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.990656 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.991034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.490724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.490790 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.491046 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.991259 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:28.490911 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.490985 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.491318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:28.491372 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:28.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.990007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.990342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.490444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.990413 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.990487 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.990822 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:30.990881 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:31.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.490709 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.491051 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:31.990823 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.991165 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.490952 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.491029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.491373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.990035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.990378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:33.490362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:33.989947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.490197 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.490500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.989977 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.990263 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:35.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.490026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:35.490389 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:35.990283 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.990360 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.990662 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.490280 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.490578 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.990095 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:37.490145 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.490220 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.490554 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:37.490609 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:37.990015 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.990389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.490110 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.490185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.990118 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.490226 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.990199 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:39.990715 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:40.490392 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.490536 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.490920 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:40.990760 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.990841 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.991131 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.490923 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.490995 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.491302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.990033 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.990472 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:42.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.490221 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.490485 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:42.490527 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:42.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.990011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.990102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:44.990519 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:45.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.490260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:45.990602 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.991037 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.490835 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.490909 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.989941 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.990385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:47.489947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.490374 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:47.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.490143 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.990580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:49.490307 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.490383 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.490720 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:49.490772 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:49.990521 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.990599 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.990879 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.491079 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.990724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.991088 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:51.490793 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.491153 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:51.491198 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:51.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.990118 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.990246 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.990561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.490415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:53.990481 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:54.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.490008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.490340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:54.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.490553 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.990361 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.990433 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:55.990770 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:56.490522 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.490596 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.490941 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:56.990624 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.990700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.991017 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.490692 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.490956 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.990832 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.990908 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.991282 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:57.991347 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:58.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:58.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.990284 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.490073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.490464 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.990275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:00.490281 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.490364 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.490677 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:00.490726 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:00.990700 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.990777 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.490903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.491267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.990417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.490437 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.990175 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:02.990670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:03.490193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.490631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.990322 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.990405 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.490523 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.490601 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.490958 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.990688 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.990756 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.991031 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.991073 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:05.490786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.491193 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.990885 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.990960 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.991336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.490367 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.990048 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.990148 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.490226 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.490304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.490653 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:07.490707 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:07.990247 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.990320 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.990637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.490406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.990481 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.490010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:09.990473 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:10.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.490228 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.490601 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.990609 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.990681 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.990963 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.490826 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.490912 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.491261 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.989991 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.990068 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:11.990514 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:12.489967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.490323 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.990096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.990095 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.990492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.990549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:14.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.990144 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.990224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.990592 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.490277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.490570 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.990543 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.990628 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.991069 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.991135 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.490872 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.490956 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.491310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.490123 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.490206 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.490561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.990295 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.990377 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.990730 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.490522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.490787 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:18.490828 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.990605 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.991041 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.490876 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.490953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.491342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.990447 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.990519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.990864 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:20.990919 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:21.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.490700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.990741 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.990818 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.991152 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.490904 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.490981 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.491320 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.989871 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.989939 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.990221 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.989977 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.990381 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.490041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.989999 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.990314 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:25.990363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.490096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.490426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.490090 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.490172 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.490501 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:27.990490 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:28.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.490121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.990112 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.990185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.490221 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.490292 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.490664 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.990003 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.490150 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.490225 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.490592 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:30.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.990567 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.990902 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.490527 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.490606 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.490937 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.990701 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.990772 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.991052 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.490780 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.491194 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.491251 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:32.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.490333 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.989999 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.990090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.990434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.490164 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.490237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.490525 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:34.990463 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.990424 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.990500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.990847 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.490629 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.490699 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.491002 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.990786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.990862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.991205 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:36.991272 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.990047 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.990142 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.489979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.990127 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.990498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.490565 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:39.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.490020 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.990313 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.990393 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.990738 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.490590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.490933 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.490989 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:41.990751 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.991149 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.489855 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.489927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.490220 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.989969 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.990392 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.490355 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.989943 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:43.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.490068 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.490167 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.490098 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.490445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.990499 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.990579 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.990932 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:45.990988 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.490765 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.490851 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.491199 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.989903 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.989975 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.990267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.490040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.490410 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.990272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.990633 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.490557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.490605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.490121 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.490602 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.990180 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.990573 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.490264 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.490334 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.490701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.490762 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:50.990600 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.990676 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.991023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.490888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.491166 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.989883 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.989958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.990326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.490029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.990008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.990311 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:52.990362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.490023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.989949 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.990375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.490788 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.490862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.491123 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.990890 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.990969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.991274 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:54.991322 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.490034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.490395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.990532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.490594 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:57.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.490205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:57.490521 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:57.990207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.990685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.490950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.990719 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.991070 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.490926 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.491272 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:59.491325 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:59.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.990477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.491169 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.491258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.491580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.990547 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.990624 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.991006 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.490798 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.490875 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.491244 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:01.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:02.490072 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.490171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:02.990211 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.990296 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.990636 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:03.990451 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:04.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:04.989923 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.490111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:05.990476 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:06.490127 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.490204 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:06.989958 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.990380 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.490431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.990121 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.990482 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:07.990525 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:08.489974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.490425 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.990156 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.990576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.490271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.490542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.490071 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.490532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:10.490597 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:10.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.990397 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.990739 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.490506 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.490943 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.990748 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.990830 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.991167 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.489923 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.490225 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.990044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.990403 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:12.990470 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:13.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:13.990123 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.990198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.490054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.990189 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.990627 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:14.990691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:15.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.490254 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.490558 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:15.990404 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.990479 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.990821 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.491027 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.990834 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.990930 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.991327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:16.991378 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:17.489874 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.489955 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.490319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:17.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.490359 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:19.489997 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.490461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:19.490518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:19.990172 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.990243 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.990549 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:20.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:20.490384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.990305 1440600 node_ready.go:38] duration metric: took 6m0.000552396s for node "functional-973657" to be "Ready" ...
	I1222 00:28:20.993510 1440600 out.go:203] 
	W1222 00:28:20.996431 1440600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:28:20.996456 1440600 out.go:285] * 
	* 
	W1222 00:28:20.998594 1440600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:28:21.002257 1440600 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-973657 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.846500088s for "functional-973657" cluster.
I1222 00:28:21.550706 1396864 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]
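The `docker container inspect` JSON above is what the test harness later queries with Go templates such as `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`. A minimal sketch of the same lookup in Python, using a trimmed-down sample that mirrors the `NetworkSettings.Ports` structure from the output above (port values copied from the log):

```python
import json

# Trimmed-down sample mirroring the NetworkSettings.Ports structure
# from the inspect output above.
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "38390"}],
        "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38393"}]
      }
    }
  }
]
""")

def host_port(inspect_data, container_port):
    """Return the first published HostPort for a container port, or None."""
    bindings = inspect_data[0]["NetworkSettings"]["Ports"].get(container_port)
    return bindings[0]["HostPort"] if bindings else None

print(host_port(inspect_output, "22/tcp"))    # → 38390, the SSH port used later in the log
print(host_port(inspect_output, "8441/tcp"))  # → 38393, the published apiserver port
```

This mirrors the template lookup only; the real harness shells out to `docker container inspect -f` rather than parsing the full JSON.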

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (329.478201ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
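helpers_test.go flags this non-zero exit as "may be ok" because `minikube status` encodes component state in its exit code while still printing the host state on stdout. A minimal sketch of that pattern, with a hypothetical shell stand-in for the real binary (the stand-in prints `Running` but exits 2, as in the run above):

```python
import subprocess

# Hypothetical stand-in for `minikube status --format={{.Host}}`:
# prints the host state but exits non-zero because another
# component is reported unhealthy.
result = subprocess.run(
    ["sh", "-c", "echo Running; exit 2"],
    capture_output=True, text=True,
)

host_state = result.stdout.strip()
# Mirror the harness's tolerance: a non-zero exit is only treated as
# fatal if the host itself is not reported as Running.
status_ok = host_state == "Running"
print(result.returncode, host_state, status_ok)
```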
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-722318 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdspecific-port4022870112/001:/mount-9p --alsologtostderr -v=1 --port 45835                     │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ ssh            │ functional-722318 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh -- ls -la /mount-9p                                                                                                             │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh sudo umount -f /mount-9p                                                                                                        │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount1 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount2 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount3 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ ssh            │ functional-722318 ssh findmnt -T /mount1                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh findmnt -T /mount2                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh findmnt -T /mount3                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ mount          │ -p functional-722318 --kill=true                                                                                                                      │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format short --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format yaml --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh pgrep buildkitd                                                                                                                 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ image          │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr                                                │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format json --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format table --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls                                                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ delete         │ -p functional-722318                                                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ start          │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ start          │ -p functional-973657 --alsologtostderr -v=8                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:22 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:22:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
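The `Log line format` header above is klog's standard prefix. A small sketch of a parser for lines in this format (the regex is derived from the format string in the header, not taken from klog's source), applied to one of the log lines below:

```python
import re

# Regex derived from the documented format:
# [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_LINE = re.compile(
    r"^(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +(?P<threadid>\d+) "
    r"(?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

sample = "I1222 00:22:15.745982 1440600 out.go:360] Setting OutFile to fd 1 ..."
m = KLOG_LINE.match(sample)
print(m.group("level"), m.group("file"), m.group("line"), m.group("msg"))
```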
	I1222 00:22:15.745982 1440600 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:22:15.746211 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746249 1440600 out.go:374] Setting ErrFile to fd 2...
	I1222 00:22:15.746270 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746555 1440600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:22:15.747001 1440600 out.go:368] Setting JSON to false
	I1222 00:22:15.747938 1440600 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111889,"bootTime":1766251047,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:22:15.748043 1440600 start.go:143] virtualization:  
	I1222 00:22:15.753569 1440600 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:22:15.756598 1440600 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:22:15.756741 1440600 notify.go:221] Checking for updates...
	I1222 00:22:15.762722 1440600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:22:15.765671 1440600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:15.768657 1440600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:22:15.771623 1440600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:22:15.774619 1440600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:22:15.777830 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:15.777978 1440600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:22:15.812917 1440600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:22:15.813051 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.874179 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.864674601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.874289 1440600 docker.go:319] overlay module found
	I1222 00:22:15.877302 1440600 out.go:179] * Using the docker driver based on existing profile
	I1222 00:22:15.880104 1440600 start.go:309] selected driver: docker
	I1222 00:22:15.880124 1440600 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.880226 1440600 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:22:15.880331 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.936346 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.927222796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.936748 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:15.936818 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:15.936877 1440600 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.939915 1440600 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:22:15.942690 1440600 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:22:15.945666 1440600 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:22:15.948535 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:15.948600 1440600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:22:15.948615 1440600 cache.go:65] Caching tarball of preloaded images
	I1222 00:22:15.948645 1440600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:22:15.948702 1440600 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:22:15.948713 1440600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:22:15.948830 1440600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:22:15.969249 1440600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:22:15.969274 1440600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:22:15.969294 1440600 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:22:15.969326 1440600 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:22:15.969396 1440600 start.go:364] duration metric: took 41.633µs to acquireMachinesLock for "functional-973657"
	I1222 00:22:15.969420 1440600 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:22:15.969432 1440600 fix.go:54] fixHost starting: 
	I1222 00:22:15.969697 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:15.991071 1440600 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:22:15.991104 1440600 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:22:15.994289 1440600 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:22:15.994325 1440600 machine.go:94] provisionDockerMachine start ...
	I1222 00:22:15.994407 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.016696 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.017052 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.017069 1440600 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:22:16.150117 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.150145 1440600 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:22:16.150214 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.171110 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.171503 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.171525 1440600 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:22:16.320804 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.320911 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.341102 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.341468 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.341492 1440600 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:22:16.474666 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: 
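The shell snippet minikube runs above ensures the machine name resolves locally: if no `/etc/hosts` line already ends in the hostname, it rewrites an existing `127.0.1.1` entry or appends a new one. The same logic sketched in Python, operating on a string rather than the real file:

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the shell logic above: if no line already maps `name`,
    rewrite an existing 127.0.1.1 entry, else append one."""
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.M):
        return hosts  # hostname already present, nothing to do
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.M)
    return hosts + f"127.0.1.1 {name}\n"

hosts = "127.0.0.1 localhost\n127.0.1.1 old-name\n"
print(ensure_hostname(hosts, "functional-973657"))
```

Like the shell version, this is idempotent: running it again on its own output leaves the file unchanged.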
	I1222 00:22:16.474761 1440600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:22:16.474804 1440600 ubuntu.go:190] setting up certificates
	I1222 00:22:16.474823 1440600 provision.go:84] configureAuth start
	I1222 00:22:16.474894 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:16.493393 1440600 provision.go:143] copyHostCerts
	I1222 00:22:16.493439 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493474 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:22:16.493495 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493578 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:22:16.493680 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493704 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:22:16.493715 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493744 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:22:16.493808 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493831 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:22:16.493837 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493863 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:22:16.493929 1440600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:22:16.688332 1440600 provision.go:177] copyRemoteCerts
	I1222 00:22:16.688423 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:22:16.688474 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.708412 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.807036 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:22:16.807104 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:22:16.826203 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:22:16.826269 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:22:16.844818 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:22:16.844882 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:22:16.862814 1440600 provision.go:87] duration metric: took 387.965654ms to configureAuth
	I1222 00:22:16.862846 1440600 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:22:16.863040 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:16.863055 1440600 machine.go:97] duration metric: took 868.721817ms to provisionDockerMachine
	I1222 00:22:16.863063 1440600 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:22:16.863075 1440600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:22:16.863140 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:22:16.863187 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.881215 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.978224 1440600 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:22:16.981674 1440600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:22:16.981697 1440600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:22:16.981701 1440600 command_runner.go:130] > VERSION_ID="12"
	I1222 00:22:16.981706 1440600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:22:16.981711 1440600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:22:16.981715 1440600 command_runner.go:130] > ID=debian
	I1222 00:22:16.981720 1440600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:22:16.981726 1440600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:22:16.981732 1440600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:22:16.981781 1440600 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:22:16.981805 1440600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:22:16.981817 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:22:16.981874 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:22:16.981966 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:22:16.981976 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /etc/ssl/certs/13968642.pem
	I1222 00:22:16.982050 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:22:16.982058 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> /etc/test/nested/copy/1396864/hosts
	I1222 00:22:16.982135 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:22:16.991499 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:17.014617 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:22:17.034289 1440600 start.go:296] duration metric: took 171.210875ms for postStartSetup
	I1222 00:22:17.034373 1440600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:22:17.034421 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.055784 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.151461 1440600 command_runner.go:130] > 11%
	I1222 00:22:17.151551 1440600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:22:17.156056 1440600 command_runner.go:130] > 174G
	I1222 00:22:17.156550 1440600 fix.go:56] duration metric: took 1.187112425s for fixHost
	I1222 00:22:17.156572 1440600 start.go:83] releasing machines lock for "functional-973657", held for 1.187162091s
	I1222 00:22:17.156642 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:17.174525 1440600 ssh_runner.go:195] Run: cat /version.json
	I1222 00:22:17.174589 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.174652 1440600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:22:17.174714 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.196471 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.199230 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.379176 1440600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:22:17.379235 1440600 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:22:17.379354 1440600 ssh_runner.go:195] Run: systemctl --version
	I1222 00:22:17.385410 1440600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:22:17.385465 1440600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:22:17.385880 1440600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:22:17.390276 1440600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:22:17.390418 1440600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:22:17.390488 1440600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:22:17.398542 1440600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:22:17.398570 1440600 start.go:496] detecting cgroup driver to use...
	I1222 00:22:17.398621 1440600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:22:17.398692 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:22:17.414048 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:22:17.427185 1440600 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:22:17.427253 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:22:17.442685 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:22:17.455696 1440600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:22:17.577927 1440600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:22:17.693641 1440600 docker.go:234] disabling docker service ...
	I1222 00:22:17.693740 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:22:17.714854 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:22:17.729523 1440600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:22:17.852439 1440600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:22:17.963077 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:22:17.977041 1440600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:22:17.991276 1440600 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 00:22:17.992369 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:22:18.003034 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:22:18.019363 1440600 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:22:18.019441 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:22:18.030259 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.041222 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:22:18.051429 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.060629 1440600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:22:18.069455 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:22:18.079294 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:22:18.088607 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:22:18.097955 1440600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:22:18.105014 1440600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:22:18.106002 1440600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:22:18.114147 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.224816 1440600 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:22:18.353040 1440600 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:22:18.353118 1440600 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:22:18.356934 1440600 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1222 00:22:18.357009 1440600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:22:18.357030 1440600 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1222 00:22:18.357053 1440600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:18.357086 1440600 command_runner.go:130] > Access: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357111 1440600 command_runner.go:130] > Modify: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357132 1440600 command_runner.go:130] > Change: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357178 1440600 command_runner.go:130] >  Birth: -
	I1222 00:22:18.357507 1440600 start.go:564] Will wait 60s for crictl version
	I1222 00:22:18.357612 1440600 ssh_runner.go:195] Run: which crictl
	I1222 00:22:18.361021 1440600 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:22:18.361396 1440600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:22:18.384093 1440600 command_runner.go:130] > Version:  0.1.0
	I1222 00:22:18.384169 1440600 command_runner.go:130] > RuntimeName:  containerd
	I1222 00:22:18.384205 1440600 command_runner.go:130] > RuntimeVersion:  v2.2.1
	I1222 00:22:18.384240 1440600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:22:18.386573 1440600 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:22:18.386687 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.407693 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.410154 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.429567 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.437868 1440600 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:22:18.440703 1440600 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:22:18.457963 1440600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:22:18.462339 1440600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:22:18.462457 1440600 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:22:18.462560 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:18.462639 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.493006 1440600 command_runner.go:130] > {
	I1222 00:22:18.493026 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.493030 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493040 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.493045 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493051 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.493055 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493059 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493072 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.493076 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493081 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.493085 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493089 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493092 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493095 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493102 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.493106 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493112 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.493116 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493120 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493128 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.493135 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493139 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.493143 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493147 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493150 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493153 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493162 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.493166 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493171 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.493178 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493186 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493194 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.493197 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493201 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.493206 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.493210 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493213 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493216 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493223 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.493227 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493231 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.493235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493238 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493246 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.493249 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493253 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.493258 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493261 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493264 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493268 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493271 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493275 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493278 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493285 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.493289 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493294 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.493297 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493300 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493308 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.493311 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493316 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.493319 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493335 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493338 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493342 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493346 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493349 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493352 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493359 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.493362 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493368 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.493371 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493374 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493383 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.493386 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493389 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.493393 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493396 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493399 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493403 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493407 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493410 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493413 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493420 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.493423 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493429 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.493432 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493435 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493443 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.493446 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493450 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.493454 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493457 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493460 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493464 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493475 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.493479 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493484 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.493487 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493491 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493498 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.493501 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493505 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.493509 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493512 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493516 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493519 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493523 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493526 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493529 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493536 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.493539 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493543 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.493547 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493550 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493557 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.493560 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493564 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.493568 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493571 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.493575 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493579 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493582 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.493585 1440600 command_runner.go:130] >     }
	I1222 00:22:18.493588 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.493591 1440600 command_runner.go:130] > }
	I1222 00:22:18.493746 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.493754 1440600 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:22:18.493814 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.517780 1440600 command_runner.go:130] > {
	I1222 00:22:18.517799 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.517803 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517813 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.517818 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517824 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.517827 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517831 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517839 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.517843 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517856 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.517861 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517865 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517867 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517870 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517878 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.517882 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517887 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.517890 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517894 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517902 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.517906 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517910 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.517913 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517917 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517920 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517923 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517930 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.517934 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517939 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.517942 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517947 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517955 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.517958 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517962 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.517966 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.517970 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517974 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517977 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517983 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.517987 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517992 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.517995 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518002 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518010 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.518013 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518017 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.518022 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518026 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518029 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518033 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518037 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518041 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518043 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518050 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.518054 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518059 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.518062 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518066 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518073 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.518098 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518103 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.518106 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518115 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518118 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518122 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518125 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518128 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518131 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518142 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.518146 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518151 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.518155 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518158 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518166 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.518170 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518178 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.518182 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518185 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518188 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518192 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518195 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518198 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518202 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518209 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.518212 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518217 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.518220 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518224 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518231 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.518235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518239 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.518242 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518246 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518249 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518253 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518260 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.518264 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518269 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.518273 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518277 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518285 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.518288 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518292 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.518295 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518299 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518302 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518306 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518310 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518318 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518322 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518328 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.518332 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518337 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.518340 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518344 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518352 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.518355 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518358 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.518362 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518366 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.518371 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518375 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518379 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.518388 1440600 command_runner.go:130] >     }
	I1222 00:22:18.518391 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.518397 1440600 command_runner.go:130] > }
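The `sudo crictl images --output json` payload logged above is what minikube inspects to decide that all images are preloaded. A minimal sketch of that kind of check, using an abbreviated two-entry sample of the payload from this log (not the full list):

```python
import json

# Abbreviated sample of the `sudo crictl images --output json` output above;
# only two of the images from the log are reproduced here.
payload = """
{
  "images": [
    {
      "id": "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
      "repoTags": ["registry.k8s.io/pause:3.10.1"],
      "size": "267939",
      "pinned": true
    },
    {
      "id": "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
      "repoTags": ["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],
      "size": "22432301",
      "pinned": false
    }
  ]
}
"""

images = json.loads(payload)["images"]
# The preload check reduces to: every expected repo tag is present in this set.
tags = {tag for img in images for tag in img["repoTags"]}
# "pinned" marks images the runtime will not garbage-collect (the pause image).
pinned = [img["repoTags"][0] for img in images if img["pinned"]]
print(sorted(tags))
print(pinned)
```

Note that `size` is a string in the CRI JSON, so it must be converted before any arithmetic on image sizes.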
	I1222 00:22:18.524524 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.524599 1440600 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:22:18.524620 1440600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:22:18.524759 1440600 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:22:18.524857 1440600 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:22:18.549454 1440600 command_runner.go:130] > {
	I1222 00:22:18.549479 1440600 command_runner.go:130] >   "cniconfig": {
	I1222 00:22:18.549486 1440600 command_runner.go:130] >     "Networks": [
	I1222 00:22:18.549489 1440600 command_runner.go:130] >       {
	I1222 00:22:18.549495 1440600 command_runner.go:130] >         "Config": {
	I1222 00:22:18.549500 1440600 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1222 00:22:18.549519 1440600 command_runner.go:130] >           "Name": "cni-loopback",
	I1222 00:22:18.549527 1440600 command_runner.go:130] >           "Plugins": [
	I1222 00:22:18.549530 1440600 command_runner.go:130] >             {
	I1222 00:22:18.549541 1440600 command_runner.go:130] >               "Network": {
	I1222 00:22:18.549546 1440600 command_runner.go:130] >                 "ipam": {},
	I1222 00:22:18.549551 1440600 command_runner.go:130] >                 "type": "loopback"
	I1222 00:22:18.549560 1440600 command_runner.go:130] >               },
	I1222 00:22:18.549566 1440600 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1222 00:22:18.549570 1440600 command_runner.go:130] >             }
	I1222 00:22:18.549579 1440600 command_runner.go:130] >           ],
	I1222 00:22:18.549590 1440600 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1222 00:22:18.549604 1440600 command_runner.go:130] >         },
	I1222 00:22:18.549612 1440600 command_runner.go:130] >         "IFName": "lo"
	I1222 00:22:18.549615 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549619 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549626 1440600 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1222 00:22:18.549636 1440600 command_runner.go:130] >     "PluginDirs": [
	I1222 00:22:18.549640 1440600 command_runner.go:130] >       "/opt/cni/bin"
	I1222 00:22:18.549643 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549648 1440600 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1222 00:22:18.549656 1440600 command_runner.go:130] >     "Prefix": "eth"
	I1222 00:22:18.549667 1440600 command_runner.go:130] >   },
	I1222 00:22:18.549674 1440600 command_runner.go:130] >   "config": {
	I1222 00:22:18.549678 1440600 command_runner.go:130] >     "cdiSpecDirs": [
	I1222 00:22:18.549682 1440600 command_runner.go:130] >       "/etc/cdi",
	I1222 00:22:18.549687 1440600 command_runner.go:130] >       "/var/run/cdi"
	I1222 00:22:18.549691 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549695 1440600 command_runner.go:130] >     "cni": {
	I1222 00:22:18.549698 1440600 command_runner.go:130] >       "binDir": "",
	I1222 00:22:18.549702 1440600 command_runner.go:130] >       "binDirs": [
	I1222 00:22:18.549706 1440600 command_runner.go:130] >         "/opt/cni/bin"
	I1222 00:22:18.549709 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.549713 1440600 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1222 00:22:18.549717 1440600 command_runner.go:130] >       "confTemplate": "",
	I1222 00:22:18.549720 1440600 command_runner.go:130] >       "ipPref": "",
	I1222 00:22:18.549728 1440600 command_runner.go:130] >       "maxConfNum": 1,
	I1222 00:22:18.549732 1440600 command_runner.go:130] >       "setupSerially": false,
	I1222 00:22:18.549739 1440600 command_runner.go:130] >       "useInternalLoopback": false
	I1222 00:22:18.549748 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549754 1440600 command_runner.go:130] >     "containerd": {
	I1222 00:22:18.549759 1440600 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1222 00:22:18.549768 1440600 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1222 00:22:18.549773 1440600 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1222 00:22:18.549777 1440600 command_runner.go:130] >       "runtimes": {
	I1222 00:22:18.549781 1440600 command_runner.go:130] >         "runc": {
	I1222 00:22:18.549786 1440600 command_runner.go:130] >           "ContainerAnnotations": null,
	I1222 00:22:18.549795 1440600 command_runner.go:130] >           "PodAnnotations": null,
	I1222 00:22:18.549799 1440600 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1222 00:22:18.549803 1440600 command_runner.go:130] >           "cgroupWritable": false,
	I1222 00:22:18.549808 1440600 command_runner.go:130] >           "cniConfDir": "",
	I1222 00:22:18.549816 1440600 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1222 00:22:18.549825 1440600 command_runner.go:130] >           "io_type": "",
	I1222 00:22:18.549829 1440600 command_runner.go:130] >           "options": {
	I1222 00:22:18.549834 1440600 command_runner.go:130] >             "BinaryName": "",
	I1222 00:22:18.549841 1440600 command_runner.go:130] >             "CriuImagePath": "",
	I1222 00:22:18.549847 1440600 command_runner.go:130] >             "CriuWorkPath": "",
	I1222 00:22:18.549851 1440600 command_runner.go:130] >             "IoGid": 0,
	I1222 00:22:18.549860 1440600 command_runner.go:130] >             "IoUid": 0,
	I1222 00:22:18.549864 1440600 command_runner.go:130] >             "NoNewKeyring": false,
	I1222 00:22:18.549869 1440600 command_runner.go:130] >             "Root": "",
	I1222 00:22:18.549874 1440600 command_runner.go:130] >             "ShimCgroup": "",
	I1222 00:22:18.549883 1440600 command_runner.go:130] >             "SystemdCgroup": false
	I1222 00:22:18.549890 1440600 command_runner.go:130] >           },
	I1222 00:22:18.549896 1440600 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1222 00:22:18.549907 1440600 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1222 00:22:18.549911 1440600 command_runner.go:130] >           "runtimePath": "",
	I1222 00:22:18.549916 1440600 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1222 00:22:18.549920 1440600 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1222 00:22:18.549924 1440600 command_runner.go:130] >           "snapshotter": ""
	I1222 00:22:18.549928 1440600 command_runner.go:130] >         }
	I1222 00:22:18.549931 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549934 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549944 1440600 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1222 00:22:18.549953 1440600 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1222 00:22:18.549961 1440600 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1222 00:22:18.549965 1440600 command_runner.go:130] >     "disableApparmor": false,
	I1222 00:22:18.549970 1440600 command_runner.go:130] >     "disableHugetlbController": true,
	I1222 00:22:18.549978 1440600 command_runner.go:130] >     "disableProcMount": false,
	I1222 00:22:18.549983 1440600 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1222 00:22:18.549987 1440600 command_runner.go:130] >     "enableCDI": true,
	I1222 00:22:18.549991 1440600 command_runner.go:130] >     "enableSelinux": false,
	I1222 00:22:18.549996 1440600 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1222 00:22:18.550004 1440600 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1222 00:22:18.550010 1440600 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1222 00:22:18.550015 1440600 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1222 00:22:18.550019 1440600 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1222 00:22:18.550024 1440600 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1222 00:22:18.550035 1440600 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1222 00:22:18.550046 1440600 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550051 1440600 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1222 00:22:18.550059 1440600 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550068 1440600 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1222 00:22:18.550072 1440600 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1222 00:22:18.550165 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550176 1440600 command_runner.go:130] >   "features": {
	I1222 00:22:18.550180 1440600 command_runner.go:130] >     "supplemental_groups_policy": true
	I1222 00:22:18.550184 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550188 1440600 command_runner.go:130] >   "golang": "go1.24.11",
	I1222 00:22:18.550201 1440600 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550222 1440600 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550231 1440600 command_runner.go:130] >   "runtimeHandlers": [
	I1222 00:22:18.550234 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550238 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550243 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550253 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550257 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550260 1440600 command_runner.go:130] >     },
	I1222 00:22:18.550264 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550268 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550272 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550277 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550282 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550286 1440600 command_runner.go:130] >       "name": "runc"
	I1222 00:22:18.550290 1440600 command_runner.go:130] >     }
	I1222 00:22:18.550293 1440600 command_runner.go:130] >   ],
	I1222 00:22:18.550296 1440600 command_runner.go:130] >   "status": {
	I1222 00:22:18.550302 1440600 command_runner.go:130] >     "conditions": [
	I1222 00:22:18.550305 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550315 1440600 command_runner.go:130] >         "message": "",
	I1222 00:22:18.550319 1440600 command_runner.go:130] >         "reason": "",
	I1222 00:22:18.550327 1440600 command_runner.go:130] >         "status": true,
	I1222 00:22:18.550337 1440600 command_runner.go:130] >         "type": "RuntimeReady"
	I1222 00:22:18.550341 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550344 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550352 1440600 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1222 00:22:18.550360 1440600 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1222 00:22:18.550365 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550369 1440600 command_runner.go:130] >         "type": "NetworkReady"
	I1222 00:22:18.550373 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550375 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550400 1440600 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1222 00:22:18.550411 1440600 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1222 00:22:18.550417 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550423 1440600 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1222 00:22:18.550427 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550430 1440600 command_runner.go:130] >     ]
	I1222 00:22:18.550433 1440600 command_runner.go:130] >   }
	I1222 00:22:18.550437 1440600 command_runner.go:130] > }
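The `status.conditions` array in the `crictl info` output above explains the state at this point in the run: the runtime is ready but the network plugin is not, which is expected before kindnet is deployed. A small sketch of reading those conditions, using a trimmed copy of the section from the log:

```python
import json

# Trimmed "status" section from the `sudo crictl --timeout=10s info` output above.
status = json.loads("""
{
  "conditions": [
    {"type": "RuntimeReady", "status": true, "reason": "", "message": ""},
    {"type": "NetworkReady", "status": false,
     "reason": "NetworkPluginNotReady",
     "message": "Network plugin returns error: cni plugin not initialized"}
  ]
}
""")

# Collect every condition whose status is false, as a readiness probe would.
not_ready = [c for c in status["conditions"] if not c["status"]]
for c in not_ready:
    print(f'{c["type"]}: {c["reason"]}')
```

Running this against the trimmed sample prints `NetworkReady: NetworkPluginNotReady`, matching the `lastCNILoadStatus` lines earlier in the same output.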
	I1222 00:22:18.553215 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:18.553243 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:18.553264 1440600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:22:18.553287 1440600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:22:18.553412 1440600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:22:18.553487 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:22:18.562348 1440600 command_runner.go:130] > kubeadm
	I1222 00:22:18.562392 1440600 command_runner.go:130] > kubectl
	I1222 00:22:18.562397 1440600 command_runner.go:130] > kubelet
	I1222 00:22:18.563648 1440600 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:22:18.563729 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:22:18.571505 1440600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:22:18.584676 1440600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:22:18.597236 1440600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 00:22:18.610841 1440600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:22:18.614244 1440600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:22:18.614541 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.726610 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:19.239399 1440600 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:22:19.239420 1440600 certs.go:195] generating shared ca certs ...
	I1222 00:22:19.239437 1440600 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.239601 1440600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:22:19.239659 1440600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:22:19.239667 1440600 certs.go:257] generating profile certs ...
	I1222 00:22:19.239794 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:22:19.239853 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:22:19.239904 1440600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:22:19.239913 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:22:19.239940 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:22:19.239954 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:22:19.239964 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:22:19.239974 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:22:19.239986 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:22:19.239996 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:22:19.240015 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:22:19.240069 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:22:19.240100 1440600 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:22:19.240108 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:22:19.240138 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:22:19.240165 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:22:19.240227 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:22:19.240279 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:19.240316 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.240338 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.240354 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem -> /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.240935 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:22:19.264800 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:22:19.285797 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:22:19.306670 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:22:19.326432 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:22:19.345177 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:22:19.365354 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:22:19.385285 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:22:19.406674 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:22:19.425094 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:22:19.443464 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:22:19.461417 1440600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:22:19.474356 1440600 ssh_runner.go:195] Run: openssl version
	I1222 00:22:19.480426 1440600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:22:19.480764 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.488508 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:22:19.496491 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500580 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500632 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500692 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.542795 1440600 command_runner.go:130] > 3ec20f2e
	I1222 00:22:19.543311 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:22:19.550778 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.558196 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:22:19.566111 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570217 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570294 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570384 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.611673 1440600 command_runner.go:130] > b5213941
	I1222 00:22:19.612225 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:22:19.620704 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.628264 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:22:19.635997 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.639846 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640210 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640329 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.681144 1440600 command_runner.go:130] > 51391683
	I1222 00:22:19.681670 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:22:19.689290 1440600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693035 1440600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693063 1440600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:22:19.693070 1440600 command_runner.go:130] > Device: 259,1	Inode: 3898609     Links: 1
	I1222 00:22:19.693078 1440600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:19.693115 1440600 command_runner.go:130] > Access: 2025-12-22 00:18:12.483760857 +0000
	I1222 00:22:19.693127 1440600 command_runner.go:130] > Modify: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693132 1440600 command_runner.go:130] > Change: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693137 1440600 command_runner.go:130] >  Birth: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693272 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:22:19.733914 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.734424 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:22:19.775247 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.775751 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:22:19.816615 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.817124 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:22:19.858237 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.858742 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:22:19.899966 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.900073 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:22:19.941050 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.941558 1440600 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:19.941671 1440600 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:22:19.941755 1440600 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:22:19.969312 1440600 cri.go:96] found id: ""
	I1222 00:22:19.969401 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:22:19.976791 1440600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:22:19.976817 1440600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:22:19.976825 1440600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:22:19.977852 1440600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:22:19.977869 1440600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:22:19.977970 1440600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:22:19.987953 1440600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:22:19.988422 1440600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.988584 1440600 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "functional-973657" cluster setting kubeconfig missing "functional-973657" context setting]
	I1222 00:22:19.988906 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.989373 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.989570 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:19.990226 1440600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:22:19.990386 1440600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:22:19.990501 1440600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:22:19.990531 1440600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:22:19.990563 1440600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:22:19.990584 1440600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:22:19.990915 1440600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:22:19.999837 1440600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:22:19.999916 1440600 kubeadm.go:602] duration metric: took 22.040118ms to restartPrimaryControlPlane
	I1222 00:22:19.999943 1440600 kubeadm.go:403] duration metric: took 58.401328ms to StartCluster
	I1222 00:22:19.999973 1440600 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.000060 1440600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.000818 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.001160 1440600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 00:22:20.001573 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:20.001632 1440600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:22:20.001706 1440600 addons.go:70] Setting storage-provisioner=true in profile "functional-973657"
	I1222 00:22:20.001719 1440600 addons.go:239] Setting addon storage-provisioner=true in "functional-973657"
	I1222 00:22:20.001742 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.002272 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.005335 1440600 addons.go:70] Setting default-storageclass=true in profile "functional-973657"
	I1222 00:22:20.005371 1440600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-973657"
	I1222 00:22:20.005777 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.009418 1440600 out.go:179] * Verifying Kubernetes components...
	I1222 00:22:20.018228 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:20.049014 1440600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:22:20.054188 1440600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.054214 1440600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:22:20.054285 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.057022 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.057199 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:20.057484 1440600 addons.go:239] Setting addon default-storageclass=true in "functional-973657"
	I1222 00:22:20.057515 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.057932 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.116105 1440600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.116126 1440600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:22:20.116211 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.118476 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.150964 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.230950 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:20.246813 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.269038 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.989713 1440600 node_ready.go:35] waiting up to 6m0s for node "functional-973657" to be "Ready" ...
	I1222 00:22:20.989868 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.989910 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.989956 1440600 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990019 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.990037 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990158 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:20.990237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:20.990539 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.220129 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.281805 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.285548 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.328766 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.389895 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.389951 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.490214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.490305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.490671 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.747162 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.762982 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.851794 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.851892 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.874934 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.874990 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.990352 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.990483 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.990846 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.169304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:22.227981 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.231304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 00:22:22.232314 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.293066 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.293113 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.490400 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.490500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.490834 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.906334 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:22.975672 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.975713 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.990847 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.991200 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:22.991243 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:23.106669 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:23.165342 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.165389 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.490828 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.490919 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.491242 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:23.690784 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:23.756600 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.760540 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.489993 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.490454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.698734 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:24.769684 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:24.773516 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:24.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:24.991642 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:25.485320 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:25.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.490301 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.490614 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:25.576354 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:25.576402 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:25.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.990409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.023839 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:26.088004 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:26.088050 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:26.490597 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.491019 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.990635 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.990716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.991074 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:27.490758 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.490828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.491160 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:27.491213 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:27.990564 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.990642 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.991013 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.490658 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.490747 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.491022 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.831344 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:28.887027 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:28.890561 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:28.990850 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.990934 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.991236 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.310761 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:29.372391 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:29.372466 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:29.490719 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.490793 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.491132 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.989857 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.989931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.990237 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:29.990280 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:30.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:30.990341 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.990414 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.990750 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.490503 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.490609 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.490891 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.990771 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.991094 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:31.991143 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:32.490784 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.490857 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.491147 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:32.990889 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.990957 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.991275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.490908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.490983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.491308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:34.489922 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.490003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.490315 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:34.490363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:34.729902 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:34.785155 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:34.788865 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.788903 1440600 retry.go:84] will retry after 5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.990103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.490036 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.490475 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.603941 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:35.664634 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:35.664674 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:35.990278 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.990353 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.990620 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:36.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.490457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:36.490508 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:36.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.990309 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.990632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.490265 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.490582 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.990369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.990755 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:38.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.491023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:38.491077 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:38.990843 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.990915 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.991302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.490378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.827913 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:39.886956 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:39.887007 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.887042 1440600 retry.go:84] will retry after 5.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.990290 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.990608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.490611 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:40.990478 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:41.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.490430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:41.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.490198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:43.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:43.490468 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:43.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.490462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.689826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:44.747699 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:44.751320 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.751358 1440600 retry.go:84] will retry after 11.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.990747 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.991101 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:45.490873 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.491354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:45.491411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:45.742662 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:45.802582 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:45.802622 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:45.990273 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.990345 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.990196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.990269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.990588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.490213 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.490626 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.990032 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.990136 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.990574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:47.990636 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:48.490291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.490369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.490704 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:48.990450 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.990547 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.990893 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.490743 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.490839 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.491164 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.989920 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.990005 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.990408 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:50.490031 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.490126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:50.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:50.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.990566 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.990936 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.490631 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.490764 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.491053 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.989876 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.989962 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.990268 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.990110 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.990196 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.990515 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:52.990572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:53.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.490563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:53.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.990024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.990327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:55.234826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:55.295545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:55.295592 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.295617 1440600 retry.go:84] will retry after 23.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.490907 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.490991 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.491326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:55.491397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:55.990270 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.990351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.490418 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.490484 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.490747 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.590202 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:56.649053 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:56.649099 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:56.990592 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.990671 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.490928 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.989908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.990332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:57.990391 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:58.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:58.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.990106 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.990483 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.490276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.990371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:59.990417 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:00.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.490099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:00.990353 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.990690 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.490534 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.490615 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.990810 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.991247 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:01.991307 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.490020 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.490365 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:02.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.490160 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.490510 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.990186 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.990567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:04.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.490056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.490388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:04.490445 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:04.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.490618 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.490690 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.990715 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.990804 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.991174 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:06.490848 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.490927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.491264 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:06.491323 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:06.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.990038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.990349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.490094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.990138 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.990557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.490297 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.990354 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.990451 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.990812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:08.990863 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:09.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.490727 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.491063 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:09.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.990741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.991016 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.490839 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.490917 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.491255 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.990187 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.990542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:11.490194 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.490275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.490617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:11.490670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:11.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.990065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.990445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.490589 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.990181 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.490414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.990036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:13.990452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:14.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:14.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.990388 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.990804 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:15.990869 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:16.088253 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:16.150952 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:16.151001 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.151025 1440600 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.490464 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.490538 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.490881 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:16.990721 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.990797 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.991127 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.489899 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.489969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.490299 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.990075 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.990174 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:18.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:18.490500 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:18.654775 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:23:18.717545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:18.717590 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:18.989890 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.989961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.990331 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.490056 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.490166 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.490043 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.490401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.990442 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.990522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.990905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:20.990965 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:21.490561 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.490647 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:21.990814 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.990880 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.991151 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.489859 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.489933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.989907 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:23.489920 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.489988 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.490275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:23.490318 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:23.990022 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.990126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.990454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.990527 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:25.489940 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.490018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.490368 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:25.490426 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:25.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.990391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.490033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.490358 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.990471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:27.490073 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.490476 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:27.490523 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:27.990208 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.990284 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.161122 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:28.220514 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:28.224335 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.224393 1440600 retry.go:84] will retry after 41.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.490932 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.491336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.989984 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.990321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.490024 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.490113 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.990094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.990474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:29.990557 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:30.490047 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:30.990259 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.990329 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.990655 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.490411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.990014 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.990469 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:32.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.490375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:32.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:32.990098 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.990500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.990154 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.990566 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:34.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.490090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.490440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:34.490501 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.990397 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.490066 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.490157 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.990431 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.990505 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:36.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.490528 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.490835 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:36.490884 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:36.990603 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.990678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.990954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.490735 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.490807 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.491181 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.990929 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.991230 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.489948 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.490349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.990008 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.990436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:38.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:39.489987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.490063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.490393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:39.989944 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.990040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.990363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.990475 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.990549 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.990889 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:40.990950 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:41.490672 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.490741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.491008 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:41.990780 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.990856 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.991209 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.490871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.490954 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.491340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.990404 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:43.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.490032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.490391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:43.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:43.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:45.490042 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.490139 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.490488 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:45.490544 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:45.990486 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.990563 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.990841 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.490719 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.491036 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.990855 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.990935 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.991321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.490334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.990452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:47.990507 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:48.490178 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.490596 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:48.990176 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.990258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.990544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.490811 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.490901 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.989874 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.989946 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.990300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:50.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.490343 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:50.490397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:50.990356 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.990437 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.990752 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.490553 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.490629 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.490975 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.990784 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.990866 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.489871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.489953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.989984 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:52.990479 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:53.490129 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.490202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.490518 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:53.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.990262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.990609 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.490055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:55.490063 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.490153 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.490516 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:55.490572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:55.990452 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.990878 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.490649 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.490982 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.990754 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.990838 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.991192 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.489902 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.489983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:57.990458 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:58.490135 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.490219 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:58.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.990363 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.490474 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.490546 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.490809 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.990637 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.990713 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.991064 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:59.991122 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:00.490316 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.490400 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.490862 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:00.990668 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.990739 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.991087 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.490883 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.490958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.990026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.990325 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:02.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:02.490534 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:02.990031 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.990131 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.990497 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.490600 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.489994 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.490456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.989918 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.989996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:04.990396 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:05.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.490511 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:05.990488 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.990562 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.990914 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.490753 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.490832 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.491115 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.560484 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:24:06.618784 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622419 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622526 1440600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:06.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.990383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:06.990433 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:07.490157 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:07.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.990270 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.990599 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.490115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:08.990466 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:09.490075 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.490168 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.490514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:09.609944 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:24:09.674734 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674775 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674856 1440600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:09.678207 1440600 out.go:179] * Enabled addons: 
	I1222 00:24:09.681672 1440600 addons.go:530] duration metric: took 1m49.680036347s for enable addons: enabled=[]
	I1222 00:24:09.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.490125 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.990344 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.990411 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.990682 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:10.990727 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:11.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.490672 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.491056 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:11.990903 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.990982 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.991278 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.990005 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.990116 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:13.489991 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.490102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.490441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:13.490498 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:13.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.990306 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.490370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.989952 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.990029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:15.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.495140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1222 00:24:15.495205 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:15.990139 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.990548 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.490265 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.490341 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.490685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.990466 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.990810 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.490605 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.491024 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.990888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.991232 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:17.991290 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:18.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.490152 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.990267 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.990595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:20.490013 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:20.490529 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:20.990365 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.990452 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.990874 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.490647 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.490974 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.990817 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.990890 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.991258 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.489990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.990001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.990291 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:22.990334 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:23.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:23.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.990618 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.490263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.490567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.990427 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:24.990485 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:25.490302 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.490387 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.490733 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:25.990623 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.990702 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.990981 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.490787 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.989979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.990394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:27.490103 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.490443 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:27.490482 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:27.989960 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.990034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.490622 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.990631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:29.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:29.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:29.990018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.990122 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.490142 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.990537 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.990938 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:31.490581 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.490653 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.490983 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:31.491033 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:31.990778 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.990859 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.991138 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.489907 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.489978 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.490318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.490158 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.990432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:33.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:34.490195 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.490327 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.490668 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:34.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.990257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.990639 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.490386 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.490458 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.490812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.990390 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.990796 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:35.990852 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:36.490573 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.490651 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.490929 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:36.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.991171 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.489889 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.489967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.990114 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.990447 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:38.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.490429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:38.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:38.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.490269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.490588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.990227 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.990305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.990674 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:40.490443 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.490519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.490858 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:40.490915 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:40.990684 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.990753 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.490794 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.491216 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.490310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.990458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:42.990518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:43.489986 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:43.990126 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.990202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.990538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.490436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.990151 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.990227 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.990551 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:44.990601 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:45.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.490261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.490538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:45.990641 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.990724 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.991072 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.491707 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.491786 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.492142 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.989879 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.989948 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.990262 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:47.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.490372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:47.490436 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:47.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.990414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.490104 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.490506 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:49.490140 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.490223 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.490576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:49.490637 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:49.990206 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.990555 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.990242 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.990326 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:51.990489 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:52.490183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:52.990255 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.990324 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.990635 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.489964 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.490363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.990423 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:54.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.489996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.490285 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:54.490333 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:54.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.490011 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.490421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.990389 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:56.490486 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.490557 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.490869 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:56.490916 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:56.990725 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.990802 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.991133 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.490974 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.491290 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.490136 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.990263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.990550 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:58.990605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:59.490266 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.490351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.490696 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:59.990527 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.990604 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.990950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.490941 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.491023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.491350 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.990417 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.990491 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.990807 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:00.990855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:01.490637 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.490718 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.491102 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:01.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.991226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.490011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.990107 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.990182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.990528 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:03.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:03.490666 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:03.990323 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.990398 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.990774 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.490099 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.490181 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.990232 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.990304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.990399 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.990480 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.990832 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:05.990887 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:06.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.491014 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:06.990839 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.990944 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.991373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.490104 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.990134 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.990205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.990514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:08.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.490044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:08.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:08.990161 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.990242 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.990563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.490287 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.490623 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:10.490169 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:10.490691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:10.990484 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.990556 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.990880 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.490691 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.490770 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.491148 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.490147 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:12.990662 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:13.490189 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.490632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:13.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.990530 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.490218 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.490648 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.990225 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.990310 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.990701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:14.990757 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:15.490513 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.490584 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.490919 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:15.990746 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.991183 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.489918 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.490332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.990305 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:17.489916 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.489993 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:17.490411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:17.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.990067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:19.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:19.490499 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:19.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.990384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.490152 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.990524 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.990901 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:21.490562 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.490638 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.490935 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:21.490983 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:21.990766 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.991203 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.490028 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.989946 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.490115 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.490193 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.990250 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.990328 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:23.990712 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:24.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.490259 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:24.990067 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.990519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.490112 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.490189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.490544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.990337 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.990408 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.990679 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:26.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.490493 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:26.490549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:26.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.990450 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.490004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.490303 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.489996 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.490409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.989926 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.990293 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:28.990335 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:29.489955 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.490419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.490182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.990514 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.990968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:30.991038 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:31.490801 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.490876 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.491238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:31.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.990319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.490200 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.490583 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.990457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:33.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:33.490407 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:33.989968 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.990049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.990398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.490108 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.490195 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.990023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.990366 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.490049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.490369 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.990125 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.990203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.990545 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:35.990599 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:36.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.490569 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:36.990016 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.490182 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.490252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.990266 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.990607 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:37.990658 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:38.490325 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.490406 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.490727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:38.990379 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.990457 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.990798 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.490205 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.990463 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:40.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.490394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:40.490452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:40.990349 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.990727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.490471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.990033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.990356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.489960 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.490300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.990099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:42.990502 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:43.490037 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:43.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.990473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:44.990528 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:45.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.490577 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:45.990696 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.990768 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.991105 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.490888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.490961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.491348 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.989888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.989967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.990307 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:47.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.490035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:47.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.490130 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.490210 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.990197 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.990271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.990616 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:49.490301 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.490378 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.490708 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:49.490767 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:49.990515 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.990590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.990888 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.490679 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.490750 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.491091 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.990830 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.990903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.991524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.490235 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.490311 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.490637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.990347 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:51.990776 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:52.490531 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.490608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.490905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:52.990690 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.990761 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.490818 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.490896 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.491226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.990031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.990354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:54.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.490015 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:54.490441 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:54.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.490031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.490422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.990385 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.990459 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.990735 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:56.490568 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.490961 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:56.491006 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:56.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.991170 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.489861 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.489931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.490260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.989963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.990042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.990407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.490137 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.490214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.990306 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.990624 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:58.990667 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:59.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:59.990163 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.990236 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.490289 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.490597 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.990658 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.990731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.991092 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:00.991152 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:01.490884 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.490964 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.491316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:01.990041 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.990455 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.490830 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.490906 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.990098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:03.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:03.490390 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:03.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.990467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.490255 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.490608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.990184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:05.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.490051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:05.490456 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:05.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.990523 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.990849 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.490609 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.490954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.990889 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.991238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.490037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.990334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:07.990374 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:08.490043 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.490140 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:08.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.990041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.490498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.990254 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.990339 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.990808 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:09.990886 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:10.490650 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.490731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.491042 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:10.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.991173 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.489924 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.490252 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:12.490161 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.490230 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:12.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:12.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.990009 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.990308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.490144 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.990441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:14.990497 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:15.490170 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.490244 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.490586 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:15.990615 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.990697 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.991007 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.490796 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.491907 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.990669 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.990740 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:16.991078 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:17.490833 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.490913 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.491260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:17.989966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.489932 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.990433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:19.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.490224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:19.490656 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:19.990226 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.990300 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.490362 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.990101 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.990509 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.490185 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.490264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.490595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.989954 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:21.990455 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:22.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.490385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:22.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.990003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.490027 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.490129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.990190 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.990277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.990691 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:23.990747 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:24.490514 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.490583 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.490927 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:24.990720 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.990794 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.490979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.491379 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.990094 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.990175 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.990521 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:26.490373 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.490449 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.490797 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:26.490855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:26.990580 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.990656 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.991034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.490724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.490790 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.491046 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.991259 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:28.490911 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.490985 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.491318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:28.491372 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:28.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.990007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.990342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.490444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.990413 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.990487 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.990822 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:30.990881 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:31.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.490709 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.491051 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:31.990823 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.991165 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.490952 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.491029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.491373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.990035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.990378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:33.490362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:33.989947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.490197 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.490500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.989977 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.990263 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:35.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.490026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:35.490389 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:35.990283 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.990360 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.990662 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.490280 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.490578 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.990095 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:37.490145 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.490220 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.490554 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:37.490609 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:37.990015 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.990389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.490110 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.490185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.990118 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.490226 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.990199 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:39.990715 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:40.490392 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.490536 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.490920 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:40.990760 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.990841 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.991131 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.490923 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.490995 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.491302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.990033 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.990472 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:42.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.490221 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.490485 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:42.490527 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:42.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.990011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.990102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:44.990519 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:45.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.490260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:45.990602 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.991037 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.490835 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.490909 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.989941 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.990385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:47.489947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.490374 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:47.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.490143 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.990580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:49.490307 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.490383 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.490720 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:49.490772 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:49.990521 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.990599 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.990879 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.491079 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.990724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.991088 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:51.490793 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.491153 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:51.491198 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:51.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.990118 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.990246 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.990561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.490415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:53.990481 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:54.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.490008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.490340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:54.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.490553 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.990361 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.990433 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:55.990770 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:56.490522 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.490596 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.490941 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:56.990624 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.990700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.991017 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.490692 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.490956 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.990832 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.990908 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.991282 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:57.991347 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:58.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:58.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.990284 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.490073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.490464 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.990275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:00.490281 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.490364 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.490677 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:00.490726 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:00.990700 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.990777 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.490903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.491267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.990417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.490437 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.990175 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:02.990670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:03.490193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.490631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.990322 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.990405 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.490523 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.490601 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.490958 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.990688 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.990756 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.991031 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.991073 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:05.490786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.491193 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.990885 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.990960 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.991336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.490367 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.990048 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.990148 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.490226 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.490304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.490653 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:07.490707 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:07.990247 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.990320 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.990637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.490406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.990481 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.490010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:09.990473 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:10.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.490228 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.490601 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.990609 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.990681 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.990963 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.490826 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.490912 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.491261 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.989991 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.990068 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:11.990514 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:12.489967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.490323 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.990096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.990095 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.990492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.990549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:14.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.990144 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.990224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.990592 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.490277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.490570 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.990543 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.990628 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.991069 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.991135 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.490872 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.490956 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.491310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.490123 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.490206 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.490561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.990295 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.990377 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.990730 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.490522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.490787 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:18.490828 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.990605 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.991041 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.490876 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.490953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.491342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.990447 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.990519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.990864 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:20.990919 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:21.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.490700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.990741 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.990818 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.991152 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.490904 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.490981 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.491320 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.989871 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.989939 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.990221 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.989977 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.990381 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.490041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.989999 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.990314 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:25.990363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.490096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.490426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.490090 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.490172 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.490501 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:27.990490 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:28.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.490121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.990112 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.990185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.490221 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.490292 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.490664 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.990003 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.490150 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.490225 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.490592 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:30.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.990567 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.990902 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.490527 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.490606 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.490937 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.990701 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.990772 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.991052 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.490780 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.491194 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.491251 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:32.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.490333 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.989999 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.990090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.990434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.490164 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.490237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.490525 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:34.990463 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.990424 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.990500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.990847 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.490629 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.490699 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.491002 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.990786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.990862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.991205 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:36.991272 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.990047 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.990142 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.489979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.990127 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.990498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.490565 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:39.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.490020 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.990313 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.990393 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.990738 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.490590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.490933 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.490989 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:41.990751 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.991149 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.489855 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.489927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.490220 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.989969 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.990392 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.490355 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.989943 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:43.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.490068 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.490167 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.490098 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.490445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.990499 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.990579 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.990932 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:45.990988 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.490765 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.490851 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.491199 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.989903 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.989975 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.990267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.490040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.490410 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.990272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.990633 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.490557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.490605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.490121 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.490602 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.990180 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.990573 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.490264 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.490334 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.490701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.490762 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:50.990600 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.990676 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.991023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.490888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.491166 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.989883 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.989958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.990326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.490029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.990008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.990311 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:52.990362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.490023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.989949 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.990375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.490788 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.490862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.491123 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.990890 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.990969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.991274 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:54.991322 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.490034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.490395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.990532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.490594 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:57.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.490205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:57.490521 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:57.990207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.990685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.490950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.990719 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.991070 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.490926 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.491272 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:59.491325 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:59.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.990477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.491169 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.491258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.491580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.990547 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.990624 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.991006 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.490798 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.490875 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.491244 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:01.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:02.490072 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.490171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:02.990211 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.990296 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.990636 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:03.990451 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:04.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:04.989923 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.490111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:05.990476 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:06.490127 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.490204 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:06.989958 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.990380 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.490431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.990121 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.990482 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:07.990525 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:08.489974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.490425 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.990156 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.990576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.490271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.490542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.490071 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.490532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:10.490597 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:10.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.990397 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.990739 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.490506 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.490943 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.990748 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.990830 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.991167 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.489923 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.490225 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.990044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.990403 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:12.990470 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:13.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:13.990123 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.990198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.490054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.990189 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.990627 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:14.990691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:15.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.490254 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.490558 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:15.990404 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.990479 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.990821 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.491027 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.990834 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.990930 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.991327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:16.991378 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:17.489874 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.489955 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.490319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:17.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.490359 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:19.489997 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.490461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:19.490518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:19.990172 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.990243 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.990549 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:20.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:20.490384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.990305 1440600 node_ready.go:38] duration metric: took 6m0.000552396s for node "functional-973657" to be "Ready" ...
	I1222 00:28:20.993510 1440600 out.go:203] 
	W1222 00:28:20.996431 1440600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:28:20.996456 1440600 out.go:285] * 
	W1222 00:28:20.998594 1440600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:28:21.002257 1440600 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.298792815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.298864643Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.298978129Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299056694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299121474Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299191005Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299269176Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299347667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299422236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299509260Z" level=info msg="Connect containerd service"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299903955Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.300613138Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.311856100Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.312075679Z" level=info msg="Start recovering state"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.312037500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.312330451Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349120777Z" level=info msg="Start event monitor"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349322698Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349392640Z" level=info msg="Start streaming server"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349460546Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349519738Z" level=info msg="runtime interface starting up..."
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349574885Z" level=info msg="starting plugins..."
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349639797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:22:18 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.351883541Z" level=info msg="containerd successfully booted in 0.079188s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:28:22.797573    8495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:22.798060    8495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:22.799690    8495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:22.800300    8495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:22.801922    8495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:28:22 up 1 day,  7:10,  0 user,  load average: 0.19, 0.29, 0.87
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:28:19 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:19 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 807.
	Dec 22 00:28:19 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:19 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:20 functional-973657 kubelet[8378]: E1222 00:28:20.048339    8378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:20 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:20 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:20 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 22 00:28:20 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:20 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:20 functional-973657 kubelet[8384]: E1222 00:28:20.783211    8384 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:20 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:20 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:21 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 22 00:28:21 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:21 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:21 functional-973657 kubelet[8390]: E1222 00:28:21.558452    8390 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:21 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:21 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:22 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 22 00:28:22 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:22 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:22 functional-973657 kubelet[8411]: E1222 00:28:22.313936    8411 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:22 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:22 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (397.266513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-973657 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-973657 get po -A: exit status 1 (61.111352ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-973657 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-973657 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-973657 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (308.345552ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-973657 logs -n 25: (1.005797774s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-722318 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdspecific-port4022870112/001:/mount-9p --alsologtostderr -v=1 --port 45835                     │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ ssh            │ functional-722318 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh -- ls -la /mount-9p                                                                                                             │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh sudo umount -f /mount-9p                                                                                                        │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount1 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount2 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ mount          │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount3 --alsologtostderr -v=1                                    │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ ssh            │ functional-722318 ssh findmnt -T /mount1                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh findmnt -T /mount2                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh findmnt -T /mount3                                                                                                              │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ mount          │ -p functional-722318 --kill=true                                                                                                                      │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ update-context │ functional-722318 update-context --alsologtostderr -v=2                                                                                               │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format short --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format yaml --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh            │ functional-722318 ssh pgrep buildkitd                                                                                                                 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ image          │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr                                                │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format json --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls --format table --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image          │ functional-722318 image ls                                                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ delete         │ -p functional-722318                                                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ start          │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ start          │ -p functional-973657 --alsologtostderr -v=8                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:22 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:22:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:22:15.745982 1440600 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:22:15.746211 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746249 1440600 out.go:374] Setting ErrFile to fd 2...
	I1222 00:22:15.746270 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746555 1440600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:22:15.747001 1440600 out.go:368] Setting JSON to false
	I1222 00:22:15.747938 1440600 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111889,"bootTime":1766251047,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:22:15.748043 1440600 start.go:143] virtualization:  
	I1222 00:22:15.753569 1440600 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:22:15.756598 1440600 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:22:15.756741 1440600 notify.go:221] Checking for updates...
	I1222 00:22:15.762722 1440600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:22:15.765671 1440600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:15.768657 1440600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:22:15.771623 1440600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:22:15.774619 1440600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:22:15.777830 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:15.777978 1440600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:22:15.812917 1440600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:22:15.813051 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.874179 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.864674601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.874289 1440600 docker.go:319] overlay module found
	I1222 00:22:15.877302 1440600 out.go:179] * Using the docker driver based on existing profile
	I1222 00:22:15.880104 1440600 start.go:309] selected driver: docker
	I1222 00:22:15.880124 1440600 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.880226 1440600 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:22:15.880331 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.936346 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.927222796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.936748 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:15.936818 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:15.936877 1440600 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.939915 1440600 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:22:15.942690 1440600 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:22:15.945666 1440600 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:22:15.948535 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:15.948600 1440600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:22:15.948615 1440600 cache.go:65] Caching tarball of preloaded images
	I1222 00:22:15.948645 1440600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:22:15.948702 1440600 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:22:15.948713 1440600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:22:15.948830 1440600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:22:15.969249 1440600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:22:15.969274 1440600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:22:15.969294 1440600 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:22:15.969326 1440600 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:22:15.969396 1440600 start.go:364] duration metric: took 41.633µs to acquireMachinesLock for "functional-973657"
	I1222 00:22:15.969420 1440600 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:22:15.969432 1440600 fix.go:54] fixHost starting: 
	I1222 00:22:15.969697 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:15.991071 1440600 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:22:15.991104 1440600 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:22:15.994289 1440600 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:22:15.994325 1440600 machine.go:94] provisionDockerMachine start ...
	I1222 00:22:15.994407 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.016696 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.017052 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.017069 1440600 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:22:16.150117 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.150145 1440600 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:22:16.150214 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.171110 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.171503 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.171525 1440600 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:22:16.320804 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.320911 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.341102 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.341468 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.341492 1440600 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:22:16.474666 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:22:16.474761 1440600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:22:16.474804 1440600 ubuntu.go:190] setting up certificates
	I1222 00:22:16.474823 1440600 provision.go:84] configureAuth start
	I1222 00:22:16.474894 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:16.493393 1440600 provision.go:143] copyHostCerts
	I1222 00:22:16.493439 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493474 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:22:16.493495 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493578 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:22:16.493680 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493704 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:22:16.493715 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493744 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:22:16.493808 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493831 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:22:16.493837 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493863 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:22:16.493929 1440600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:22:16.688332 1440600 provision.go:177] copyRemoteCerts
	I1222 00:22:16.688423 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:22:16.688474 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.708412 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.807036 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:22:16.807104 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:22:16.826203 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:22:16.826269 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:22:16.844818 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:22:16.844882 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:22:16.862814 1440600 provision.go:87] duration metric: took 387.965654ms to configureAuth
	I1222 00:22:16.862846 1440600 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:22:16.863040 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:16.863055 1440600 machine.go:97] duration metric: took 868.721817ms to provisionDockerMachine
	I1222 00:22:16.863063 1440600 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:22:16.863075 1440600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:22:16.863140 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:22:16.863187 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.881215 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.978224 1440600 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:22:16.981674 1440600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:22:16.981697 1440600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:22:16.981701 1440600 command_runner.go:130] > VERSION_ID="12"
	I1222 00:22:16.981706 1440600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:22:16.981711 1440600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:22:16.981715 1440600 command_runner.go:130] > ID=debian
	I1222 00:22:16.981720 1440600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:22:16.981726 1440600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:22:16.981732 1440600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:22:16.981781 1440600 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:22:16.981805 1440600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:22:16.981817 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:22:16.981874 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:22:16.981966 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:22:16.981976 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /etc/ssl/certs/13968642.pem
	I1222 00:22:16.982050 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:22:16.982058 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> /etc/test/nested/copy/1396864/hosts
	I1222 00:22:16.982135 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:22:16.991499 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:17.014617 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:22:17.034289 1440600 start.go:296] duration metric: took 171.210875ms for postStartSetup
	I1222 00:22:17.034373 1440600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:22:17.034421 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.055784 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.151461 1440600 command_runner.go:130] > 11%
	I1222 00:22:17.151551 1440600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:22:17.156056 1440600 command_runner.go:130] > 174G
	I1222 00:22:17.156550 1440600 fix.go:56] duration metric: took 1.187112425s for fixHost
	I1222 00:22:17.156572 1440600 start.go:83] releasing machines lock for "functional-973657", held for 1.187162091s
	I1222 00:22:17.156642 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:17.174525 1440600 ssh_runner.go:195] Run: cat /version.json
	I1222 00:22:17.174589 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.174652 1440600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:22:17.174714 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.196471 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.199230 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.379176 1440600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:22:17.379235 1440600 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:22:17.379354 1440600 ssh_runner.go:195] Run: systemctl --version
	I1222 00:22:17.385410 1440600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:22:17.385465 1440600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:22:17.385880 1440600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:22:17.390276 1440600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:22:17.390418 1440600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:22:17.390488 1440600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:22:17.398542 1440600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:22:17.398570 1440600 start.go:496] detecting cgroup driver to use...
	I1222 00:22:17.398621 1440600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:22:17.398692 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:22:17.414048 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:22:17.427185 1440600 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:22:17.427253 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:22:17.442685 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:22:17.455696 1440600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:22:17.577927 1440600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:22:17.693641 1440600 docker.go:234] disabling docker service ...
	I1222 00:22:17.693740 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:22:17.714854 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:22:17.729523 1440600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:22:17.852439 1440600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:22:17.963077 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:22:17.977041 1440600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:22:17.991276 1440600 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 00:22:17.992369 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:22:18.003034 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:22:18.019363 1440600 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:22:18.019441 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:22:18.030259 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.041222 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:22:18.051429 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.060629 1440600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:22:18.069455 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:22:18.079294 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:22:18.088607 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:22:18.097955 1440600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:22:18.105014 1440600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:22:18.106002 1440600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:22:18.114147 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.224816 1440600 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:22:18.353040 1440600 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:22:18.353118 1440600 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:22:18.356934 1440600 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1222 00:22:18.357009 1440600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:22:18.357030 1440600 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1222 00:22:18.357053 1440600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:18.357086 1440600 command_runner.go:130] > Access: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357111 1440600 command_runner.go:130] > Modify: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357132 1440600 command_runner.go:130] > Change: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357178 1440600 command_runner.go:130] >  Birth: -
	I1222 00:22:18.357507 1440600 start.go:564] Will wait 60s for crictl version
	I1222 00:22:18.357612 1440600 ssh_runner.go:195] Run: which crictl
	I1222 00:22:18.361021 1440600 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:22:18.361396 1440600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:22:18.384093 1440600 command_runner.go:130] > Version:  0.1.0
	I1222 00:22:18.384169 1440600 command_runner.go:130] > RuntimeName:  containerd
	I1222 00:22:18.384205 1440600 command_runner.go:130] > RuntimeVersion:  v2.2.1
	I1222 00:22:18.384240 1440600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:22:18.386573 1440600 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:22:18.386687 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.407693 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.410154 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.429567 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.437868 1440600 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:22:18.440703 1440600 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:22:18.457963 1440600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:22:18.462339 1440600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:22:18.462457 1440600 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:22:18.462560 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:18.462639 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.493006 1440600 command_runner.go:130] > {
	I1222 00:22:18.493026 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.493030 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493040 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.493045 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493051 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.493055 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493059 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493072 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.493076 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493081 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.493085 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493089 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493092 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493095 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493102 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.493106 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493112 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.493116 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493120 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493128 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.493135 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493139 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.493143 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493147 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493150 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493153 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493162 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.493166 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493171 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.493178 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493186 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493194 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.493197 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493201 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.493206 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.493210 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493213 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493216 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493223 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.493227 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493231 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.493235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493238 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493246 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.493249 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493253 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.493258 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493261 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493264 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493268 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493271 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493275 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493278 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493285 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.493289 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493294 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.493297 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493300 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493308 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.493311 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493316 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.493319 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493335 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493338 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493342 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493346 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493349 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493352 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493359 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.493362 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493368 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.493371 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493374 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493383 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.493386 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493389 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.493393 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493396 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493399 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493403 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493407 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493410 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493413 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493420 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.493423 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493429 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.493432 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493435 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493443 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.493446 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493450 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.493454 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493457 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493460 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493464 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493475 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.493479 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493484 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.493487 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493491 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493498 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.493501 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493505 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.493509 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493512 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493516 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493519 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493523 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493526 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493529 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493536 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.493539 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493543 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.493547 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493550 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493557 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.493560 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493564 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.493568 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493571 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.493575 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493579 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493582 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.493585 1440600 command_runner.go:130] >     }
	I1222 00:22:18.493588 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.493591 1440600 command_runner.go:130] > }
	I1222 00:22:18.493746 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.493754 1440600 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:22:18.493814 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.517780 1440600 command_runner.go:130] > {
	I1222 00:22:18.517799 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.517803 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517813 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.517818 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517824 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.517827 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517831 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517839 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.517843 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517856 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.517861 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517865 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517867 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517870 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517878 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.517882 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517887 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.517890 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517894 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517902 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.517906 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517910 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.517913 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517917 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517920 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517923 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517930 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.517934 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517939 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.517942 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517947 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517955 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.517958 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517962 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.517966 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.517970 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517974 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517977 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517983 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.517987 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517992 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.517995 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518002 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518010 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.518013 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518017 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.518022 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518026 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518029 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518033 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518037 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518041 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518043 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518050 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.518054 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518059 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.518062 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518066 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518073 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.518098 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518103 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.518106 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518115 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518118 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518122 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518125 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518128 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518131 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518142 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.518146 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518151 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.518155 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518158 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518166 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.518170 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518178 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.518182 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518185 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518188 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518192 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518195 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518198 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518202 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518209 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.518212 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518217 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.518220 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518224 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518231 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.518235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518239 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.518242 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518246 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518249 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518253 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518260 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.518264 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518269 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.518273 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518277 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518285 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.518288 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518292 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.518295 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518299 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518302 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518306 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518310 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518318 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518322 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518328 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.518332 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518337 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.518340 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518344 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518352 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.518355 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518358 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.518362 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518366 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.518371 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518375 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518379 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.518388 1440600 command_runner.go:130] >     }
	I1222 00:22:18.518391 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.518397 1440600 command_runner.go:130] > }
	I1222 00:22:18.524524 1440600 containerd.go:627] all images are preloaded for containerd runtime.
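	The "all images are preloaded" decision logged above amounts to comparing the repoTags reported by `sudo crictl images --output json` against the set of images this Kubernetes version needs. A minimal sketch of that comparison, using an abbreviated copy of the image list from this log (the `all_preloaded` helper is illustrative, not minikube's actual implementation):

```python
import json

# Abbreviated `crictl images --output json` output, as captured in this log.
crictl_output = json.dumps({
    "images": [
        {"repoTags": ["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"]},
        {"repoTags": ["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"]},
        {"repoTags": ["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"]},
        {"repoTags": ["registry.k8s.io/kube-proxy:v1.35.0-rc.1"]},
        {"repoTags": ["registry.k8s.io/etcd:3.6.6-0"]},
        {"repoTags": ["registry.k8s.io/coredns/coredns:v1.13.1"]},
        {"repoTags": ["registry.k8s.io/pause:3.10.1"]},
    ]
})

def all_preloaded(raw_json, required):
    """Return True if every required tag appears in the crictl image list."""
    present = {tag
               for img in json.loads(raw_json)["images"]
               for tag in img.get("repoTags", [])}
    return required <= present

required = {
    "registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
    "registry.k8s.io/etcd:3.6.6-0",
    "registry.k8s.io/pause:3.10.1",
}
print(all_preloaded(crictl_output, required))  # True
```

	When the check passes, the loader skips both tarball extraction and per-image loading, which is why the log immediately moves on to kubelet configuration.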
	I1222 00:22:18.524599 1440600 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:22:18.524620 1440600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:22:18.524759 1440600 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:22:18.524857 1440600 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:22:18.549454 1440600 command_runner.go:130] > {
	I1222 00:22:18.549479 1440600 command_runner.go:130] >   "cniconfig": {
	I1222 00:22:18.549486 1440600 command_runner.go:130] >     "Networks": [
	I1222 00:22:18.549489 1440600 command_runner.go:130] >       {
	I1222 00:22:18.549495 1440600 command_runner.go:130] >         "Config": {
	I1222 00:22:18.549500 1440600 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1222 00:22:18.549519 1440600 command_runner.go:130] >           "Name": "cni-loopback",
	I1222 00:22:18.549527 1440600 command_runner.go:130] >           "Plugins": [
	I1222 00:22:18.549530 1440600 command_runner.go:130] >             {
	I1222 00:22:18.549541 1440600 command_runner.go:130] >               "Network": {
	I1222 00:22:18.549546 1440600 command_runner.go:130] >                 "ipam": {},
	I1222 00:22:18.549551 1440600 command_runner.go:130] >                 "type": "loopback"
	I1222 00:22:18.549560 1440600 command_runner.go:130] >               },
	I1222 00:22:18.549566 1440600 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1222 00:22:18.549570 1440600 command_runner.go:130] >             }
	I1222 00:22:18.549579 1440600 command_runner.go:130] >           ],
	I1222 00:22:18.549590 1440600 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1222 00:22:18.549604 1440600 command_runner.go:130] >         },
	I1222 00:22:18.549612 1440600 command_runner.go:130] >         "IFName": "lo"
	I1222 00:22:18.549615 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549619 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549626 1440600 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1222 00:22:18.549636 1440600 command_runner.go:130] >     "PluginDirs": [
	I1222 00:22:18.549640 1440600 command_runner.go:130] >       "/opt/cni/bin"
	I1222 00:22:18.549643 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549648 1440600 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1222 00:22:18.549656 1440600 command_runner.go:130] >     "Prefix": "eth"
	I1222 00:22:18.549667 1440600 command_runner.go:130] >   },
	I1222 00:22:18.549674 1440600 command_runner.go:130] >   "config": {
	I1222 00:22:18.549678 1440600 command_runner.go:130] >     "cdiSpecDirs": [
	I1222 00:22:18.549682 1440600 command_runner.go:130] >       "/etc/cdi",
	I1222 00:22:18.549687 1440600 command_runner.go:130] >       "/var/run/cdi"
	I1222 00:22:18.549691 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549695 1440600 command_runner.go:130] >     "cni": {
	I1222 00:22:18.549698 1440600 command_runner.go:130] >       "binDir": "",
	I1222 00:22:18.549702 1440600 command_runner.go:130] >       "binDirs": [
	I1222 00:22:18.549706 1440600 command_runner.go:130] >         "/opt/cni/bin"
	I1222 00:22:18.549709 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.549713 1440600 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1222 00:22:18.549717 1440600 command_runner.go:130] >       "confTemplate": "",
	I1222 00:22:18.549720 1440600 command_runner.go:130] >       "ipPref": "",
	I1222 00:22:18.549728 1440600 command_runner.go:130] >       "maxConfNum": 1,
	I1222 00:22:18.549732 1440600 command_runner.go:130] >       "setupSerially": false,
	I1222 00:22:18.549739 1440600 command_runner.go:130] >       "useInternalLoopback": false
	I1222 00:22:18.549748 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549754 1440600 command_runner.go:130] >     "containerd": {
	I1222 00:22:18.549759 1440600 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1222 00:22:18.549768 1440600 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1222 00:22:18.549773 1440600 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1222 00:22:18.549777 1440600 command_runner.go:130] >       "runtimes": {
	I1222 00:22:18.549781 1440600 command_runner.go:130] >         "runc": {
	I1222 00:22:18.549786 1440600 command_runner.go:130] >           "ContainerAnnotations": null,
	I1222 00:22:18.549795 1440600 command_runner.go:130] >           "PodAnnotations": null,
	I1222 00:22:18.549799 1440600 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1222 00:22:18.549803 1440600 command_runner.go:130] >           "cgroupWritable": false,
	I1222 00:22:18.549808 1440600 command_runner.go:130] >           "cniConfDir": "",
	I1222 00:22:18.549816 1440600 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1222 00:22:18.549825 1440600 command_runner.go:130] >           "io_type": "",
	I1222 00:22:18.549829 1440600 command_runner.go:130] >           "options": {
	I1222 00:22:18.549834 1440600 command_runner.go:130] >             "BinaryName": "",
	I1222 00:22:18.549841 1440600 command_runner.go:130] >             "CriuImagePath": "",
	I1222 00:22:18.549847 1440600 command_runner.go:130] >             "CriuWorkPath": "",
	I1222 00:22:18.549851 1440600 command_runner.go:130] >             "IoGid": 0,
	I1222 00:22:18.549860 1440600 command_runner.go:130] >             "IoUid": 0,
	I1222 00:22:18.549864 1440600 command_runner.go:130] >             "NoNewKeyring": false,
	I1222 00:22:18.549869 1440600 command_runner.go:130] >             "Root": "",
	I1222 00:22:18.549874 1440600 command_runner.go:130] >             "ShimCgroup": "",
	I1222 00:22:18.549883 1440600 command_runner.go:130] >             "SystemdCgroup": false
	I1222 00:22:18.549890 1440600 command_runner.go:130] >           },
	I1222 00:22:18.549896 1440600 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1222 00:22:18.549907 1440600 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1222 00:22:18.549911 1440600 command_runner.go:130] >           "runtimePath": "",
	I1222 00:22:18.549916 1440600 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1222 00:22:18.549920 1440600 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1222 00:22:18.549924 1440600 command_runner.go:130] >           "snapshotter": ""
	I1222 00:22:18.549928 1440600 command_runner.go:130] >         }
	I1222 00:22:18.549931 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549934 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549944 1440600 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1222 00:22:18.549953 1440600 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1222 00:22:18.549961 1440600 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1222 00:22:18.549965 1440600 command_runner.go:130] >     "disableApparmor": false,
	I1222 00:22:18.549970 1440600 command_runner.go:130] >     "disableHugetlbController": true,
	I1222 00:22:18.549978 1440600 command_runner.go:130] >     "disableProcMount": false,
	I1222 00:22:18.549983 1440600 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1222 00:22:18.549987 1440600 command_runner.go:130] >     "enableCDI": true,
	I1222 00:22:18.549991 1440600 command_runner.go:130] >     "enableSelinux": false,
	I1222 00:22:18.549996 1440600 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1222 00:22:18.550004 1440600 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1222 00:22:18.550010 1440600 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1222 00:22:18.550015 1440600 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1222 00:22:18.550019 1440600 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1222 00:22:18.550024 1440600 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1222 00:22:18.550035 1440600 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1222 00:22:18.550046 1440600 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550051 1440600 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1222 00:22:18.550059 1440600 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550068 1440600 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1222 00:22:18.550072 1440600 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1222 00:22:18.550165 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550176 1440600 command_runner.go:130] >   "features": {
	I1222 00:22:18.550180 1440600 command_runner.go:130] >     "supplemental_groups_policy": true
	I1222 00:22:18.550184 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550188 1440600 command_runner.go:130] >   "golang": "go1.24.11",
	I1222 00:22:18.550201 1440600 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550222 1440600 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550231 1440600 command_runner.go:130] >   "runtimeHandlers": [
	I1222 00:22:18.550234 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550238 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550243 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550253 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550257 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550260 1440600 command_runner.go:130] >     },
	I1222 00:22:18.550264 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550268 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550272 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550277 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550282 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550286 1440600 command_runner.go:130] >       "name": "runc"
	I1222 00:22:18.550290 1440600 command_runner.go:130] >     }
	I1222 00:22:18.550293 1440600 command_runner.go:130] >   ],
	I1222 00:22:18.550296 1440600 command_runner.go:130] >   "status": {
	I1222 00:22:18.550302 1440600 command_runner.go:130] >     "conditions": [
	I1222 00:22:18.550305 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550315 1440600 command_runner.go:130] >         "message": "",
	I1222 00:22:18.550319 1440600 command_runner.go:130] >         "reason": "",
	I1222 00:22:18.550327 1440600 command_runner.go:130] >         "status": true,
	I1222 00:22:18.550337 1440600 command_runner.go:130] >         "type": "RuntimeReady"
	I1222 00:22:18.550341 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550344 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550352 1440600 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1222 00:22:18.550360 1440600 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1222 00:22:18.550365 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550369 1440600 command_runner.go:130] >         "type": "NetworkReady"
	I1222 00:22:18.550373 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550375 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550400 1440600 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1222 00:22:18.550411 1440600 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1222 00:22:18.550417 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550423 1440600 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1222 00:22:18.550427 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550430 1440600 command_runner.go:130] >     ]
	I1222 00:22:18.550433 1440600 command_runner.go:130] >   }
	I1222 00:22:18.550437 1440600 command_runner.go:130] > }
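	The `crictl --timeout=10s info` dump above reports runtime health in `status.conditions`: RuntimeReady is true, while NetworkReady is false with reason NetworkPluginNotReady because no CNI config exists yet (kindnet is only selected in the next step). A small sketch of reading those conditions, using a trimmed copy of the status block from this log (the `condition` helper is illustrative):

```python
# Trimmed `crictl info` status block, mirroring the conditions in this log.
info = {
    "status": {
        "conditions": [
            {"type": "RuntimeReady", "status": True,
             "reason": "", "message": ""},
            {"type": "NetworkReady", "status": False,
             "reason": "NetworkPluginNotReady",
             "message": "Network plugin returns error: cni plugin not initialized"},
        ]
    }
}

def condition(info, cond_type):
    """Return the named condition from the crictl info status block, if present."""
    for c in info["status"]["conditions"]:
        if c["type"] == cond_type:
            return c
    return None

print(condition(info, "RuntimeReady")["status"])   # True
print(condition(info, "NetworkReady")["reason"])   # NetworkPluginNotReady
```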
	I1222 00:22:18.553215 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:18.553243 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:18.553264 1440600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:22:18.553287 1440600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:22:18.553412 1440600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:22:18.553487 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:22:18.562348 1440600 command_runner.go:130] > kubeadm
	I1222 00:22:18.562392 1440600 command_runner.go:130] > kubectl
	I1222 00:22:18.562397 1440600 command_runner.go:130] > kubelet
	I1222 00:22:18.563648 1440600 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:22:18.563729 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:22:18.571505 1440600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:22:18.584676 1440600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:22:18.597236 1440600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 00:22:18.610841 1440600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:22:18.614244 1440600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:22:18.614541 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.726610 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:19.239399 1440600 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:22:19.239420 1440600 certs.go:195] generating shared ca certs ...
	I1222 00:22:19.239437 1440600 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.239601 1440600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:22:19.239659 1440600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:22:19.239667 1440600 certs.go:257] generating profile certs ...
	I1222 00:22:19.239794 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:22:19.239853 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:22:19.239904 1440600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:22:19.239913 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:22:19.239940 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:22:19.239954 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:22:19.239964 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:22:19.239974 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:22:19.239986 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:22:19.239996 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:22:19.240015 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:22:19.240069 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:22:19.240100 1440600 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:22:19.240108 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:22:19.240138 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:22:19.240165 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:22:19.240227 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:22:19.240279 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:19.240316 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.240338 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.240354 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem -> /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.240935 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:22:19.264800 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:22:19.285797 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:22:19.306670 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:22:19.326432 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:22:19.345177 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:22:19.365354 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:22:19.385285 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:22:19.406674 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:22:19.425094 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:22:19.443464 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:22:19.461417 1440600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:22:19.474356 1440600 ssh_runner.go:195] Run: openssl version
	I1222 00:22:19.480426 1440600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:22:19.480764 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.488508 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:22:19.496491 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500580 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500632 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500692 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.542795 1440600 command_runner.go:130] > 3ec20f2e
	I1222 00:22:19.543311 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:22:19.550778 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.558196 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:22:19.566111 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570217 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570294 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570384 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.611673 1440600 command_runner.go:130] > b5213941
	I1222 00:22:19.612225 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:22:19.620704 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.628264 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:22:19.635997 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.639846 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640210 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640329 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.681144 1440600 command_runner.go:130] > 51391683
	I1222 00:22:19.681670 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:22:19.689290 1440600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693035 1440600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693063 1440600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:22:19.693070 1440600 command_runner.go:130] > Device: 259,1	Inode: 3898609     Links: 1
	I1222 00:22:19.693078 1440600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:19.693115 1440600 command_runner.go:130] > Access: 2025-12-22 00:18:12.483760857 +0000
	I1222 00:22:19.693127 1440600 command_runner.go:130] > Modify: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693132 1440600 command_runner.go:130] > Change: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693137 1440600 command_runner.go:130] >  Birth: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693272 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:22:19.733914 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.734424 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:22:19.775247 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.775751 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:22:19.816615 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.817124 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:22:19.858237 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.858742 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:22:19.899966 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.900073 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:22:19.941050 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.941558 1440600 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:19.941671 1440600 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:22:19.941755 1440600 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:22:19.969312 1440600 cri.go:96] found id: ""
	I1222 00:22:19.969401 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:22:19.976791 1440600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:22:19.976817 1440600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:22:19.976825 1440600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:22:19.977852 1440600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:22:19.977869 1440600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:22:19.977970 1440600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:22:19.987953 1440600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:22:19.988422 1440600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.988584 1440600 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "functional-973657" cluster setting kubeconfig missing "functional-973657" context setting]
	I1222 00:22:19.988906 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.989373 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.989570 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:19.990226 1440600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:22:19.990386 1440600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:22:19.990501 1440600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:22:19.990531 1440600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:22:19.990563 1440600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:22:19.990584 1440600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:22:19.990915 1440600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:22:19.999837 1440600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:22:19.999916 1440600 kubeadm.go:602] duration metric: took 22.040118ms to restartPrimaryControlPlane
	I1222 00:22:19.999943 1440600 kubeadm.go:403] duration metric: took 58.401328ms to StartCluster
	I1222 00:22:19.999973 1440600 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.000060 1440600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.000818 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.001160 1440600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 00:22:20.001573 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:20.001632 1440600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:22:20.001706 1440600 addons.go:70] Setting storage-provisioner=true in profile "functional-973657"
	I1222 00:22:20.001719 1440600 addons.go:239] Setting addon storage-provisioner=true in "functional-973657"
	I1222 00:22:20.001742 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.002272 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.005335 1440600 addons.go:70] Setting default-storageclass=true in profile "functional-973657"
	I1222 00:22:20.005371 1440600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-973657"
	I1222 00:22:20.005777 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.009418 1440600 out.go:179] * Verifying Kubernetes components...
	I1222 00:22:20.018228 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:20.049014 1440600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:22:20.054188 1440600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.054214 1440600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:22:20.054285 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.057022 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.057199 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:20.057484 1440600 addons.go:239] Setting addon default-storageclass=true in "functional-973657"
	I1222 00:22:20.057515 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.057932 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.116105 1440600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.116126 1440600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:22:20.116211 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.118476 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.150964 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.230950 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:20.246813 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.269038 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.989713 1440600 node_ready.go:35] waiting up to 6m0s for node "functional-973657" to be "Ready" ...
	I1222 00:22:20.989868 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.989910 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.989956 1440600 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990019 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.990037 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990158 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:20.990237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:20.990539 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.220129 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.281805 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.285548 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.328766 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.389895 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.389951 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.490214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.490305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.490671 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.747162 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.762982 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.851794 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.851892 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.874934 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.874990 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.990352 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.990483 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.990846 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.169304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:22.227981 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.231304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 00:22:22.232314 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.293066 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.293113 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.490400 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.490500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.490834 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.906334 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:22.975672 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.975713 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.990847 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.991200 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:22.991243 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:23.106669 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:23.165342 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.165389 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.490828 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.490919 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.491242 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:23.690784 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:23.756600 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.760540 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.489993 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.490454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.698734 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:24.769684 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:24.773516 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:24.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:24.991642 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:25.485320 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:25.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.490301 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.490614 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:25.576354 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:25.576402 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:25.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.990409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.023839 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:26.088004 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:26.088050 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:26.490597 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.491019 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.990635 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.990716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.991074 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:27.490758 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.490828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.491160 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:27.491213 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:27.990564 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.990642 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.991013 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.490658 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.490747 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.491022 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.831344 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:28.887027 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:28.890561 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:28.990850 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.990934 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.991236 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.310761 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:29.372391 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:29.372466 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:29.490719 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.490793 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.491132 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.989857 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.989931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.990237 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:29.990280 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:30.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:30.990341 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.990414 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.990750 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.490503 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.490609 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.490891 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.990771 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.991094 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:31.991143 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:32.490784 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.490857 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.491147 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:32.990889 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.990957 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.991275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.490908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.490983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.491308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:34.489922 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.490003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.490315 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:34.490363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:34.729902 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:34.785155 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:34.788865 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.788903 1440600 retry.go:84] will retry after 5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.990103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.490036 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.490475 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.603941 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:35.664634 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:35.664674 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:35.990278 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.990353 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.990620 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:36.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.490457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:36.490508 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:36.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.990309 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.990632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.490265 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.490582 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.990369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.990755 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:38.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.491023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:38.491077 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:38.990843 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.990915 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.991302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.490378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.827913 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:39.886956 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:39.887007 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.887042 1440600 retry.go:84] will retry after 5.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.990290 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.990608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.490611 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:40.990478 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:41.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.490430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:41.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.490198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:43.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:43.490468 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:43.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.490462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.689826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:44.747699 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:44.751320 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.751358 1440600 retry.go:84] will retry after 11.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.990747 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.991101 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:45.490873 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.491354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:45.491411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:45.742662 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:45.802582 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:45.802622 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:45.990273 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.990345 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.990196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.990269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.990588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.490213 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.490626 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.990032 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.990136 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.990574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:47.990636 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:48.490291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.490369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.490704 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:48.990450 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.990547 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.990893 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.490743 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.490839 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.491164 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.989920 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.990005 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.990408 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:50.490031 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.490126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:50.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:50.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.990566 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.990936 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.490631 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.490764 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.491053 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.989876 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.989962 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.990268 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.990110 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.990196 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.990515 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:52.990572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:53.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.490563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:53.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.990024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.990327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:55.234826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:55.295545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:55.295592 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.295617 1440600 retry.go:84] will retry after 23.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.490907 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.490991 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.491326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:55.491397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:55.990270 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.990351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.490418 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.490484 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.490747 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.590202 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:56.649053 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:56.649099 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:56.990592 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.990671 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.490928 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.989908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.990332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:57.990391 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:58.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:58.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.990106 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.990483 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.490276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.990371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:59.990417 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:00.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.490099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:00.990353 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.990690 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.490534 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.490615 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.990810 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.991247 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:01.991307 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.490020 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.490365 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:02.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.490160 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.490510 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.990186 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.990567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:04.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.490056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.490388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:04.490445 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:04.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.490618 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.490690 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.990715 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.990804 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.991174 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:06.490848 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.490927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.491264 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:06.491323 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:06.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.990038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.990349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.490094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.990138 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.990557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.490297 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.990354 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.990451 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.990812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:08.990863 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:09.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.490727 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.491063 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:09.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.990741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.991016 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.490839 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.490917 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.491255 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.990187 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.990542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:11.490194 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.490275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.490617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:11.490670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:11.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.990065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.990445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.490589 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.990181 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.490414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.990036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:13.990452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:14.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:14.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.990388 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.990804 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:15.990869 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:16.088253 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:16.150952 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:16.151001 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.151025 1440600 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.490464 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.490538 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.490881 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:16.990721 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.990797 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.991127 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.489899 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.489969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.490299 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.990075 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.990174 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:18.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:18.490500 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:18.654775 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:23:18.717545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:18.717590 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:18.989890 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.989961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.990331 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.490056 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.490166 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.490043 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.490401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.990442 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.990522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.990905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:20.990965 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:21.490561 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.490647 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:21.990814 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.990880 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.991151 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.489859 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.489933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.989907 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:23.489920 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.489988 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.490275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:23.490318 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:23.990022 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.990126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.990454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.990527 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:25.489940 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.490018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.490368 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:25.490426 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:25.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.990391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.490033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.490358 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.990471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:27.490073 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.490476 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:27.490523 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:27.990208 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.990284 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.161122 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:28.220514 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:28.224335 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.224393 1440600 retry.go:84] will retry after 41.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.490932 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.491336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.989984 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.990321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.490024 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.490113 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.990094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.990474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:29.990557 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:30.490047 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:30.990259 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.990329 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.990655 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.490411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.990014 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.990469 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:32.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.490375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:32.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:32.990098 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.990500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.990154 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.990566 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:34.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.490090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.490440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:34.490501 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.990397 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.490066 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.490157 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.990431 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.990505 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:36.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.490528 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.490835 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:36.490884 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:36.990603 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.990678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.990954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.490735 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.490807 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.491181 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.990929 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.991230 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.489948 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.490349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.990008 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.990436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:38.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:39.489987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.490063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.490393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:39.989944 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.990040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.990363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.990475 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.990549 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.990889 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:40.990950 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:41.490672 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.490741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.491008 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:41.990780 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.990856 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.991209 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.490871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.490954 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.491340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.990404 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:43.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.490032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.490391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:43.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:43.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:45.490042 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.490139 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.490488 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:45.490544 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:45.990486 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.990563 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.990841 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.490719 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.491036 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.990855 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.990935 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.991321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.490334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.990452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:47.990507 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:48.490178 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.490596 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:48.990176 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.990258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.990544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.490811 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.490901 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.989874 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.989946 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.990300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:50.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.490343 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:50.490397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:50.990356 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.990437 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.990752 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.490553 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.490629 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.490975 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.990784 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.990866 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.489871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.489953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.989984 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:52.990479 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:53.490129 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.490202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.490518 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:53.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.990262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.990609 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.490055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:55.490063 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.490153 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.490516 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:55.490572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:55.990452 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.990878 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.490649 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.490982 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.990754 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.990838 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.991192 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.489902 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.489983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:57.990458 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:58.490135 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.490219 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:58.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.990363 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.490474 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.490546 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.490809 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.990637 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.990713 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.991064 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:59.991122 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:00.490316 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.490400 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.490862 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:00.990668 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.990739 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.991087 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.490883 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.490958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.990026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.990325 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:02.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:02.490534 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:02.990031 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.990131 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.990497 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.490600 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.489994 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.490456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.989918 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.989996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:04.990396 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:05.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.490511 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:05.990488 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.990562 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.990914 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.490753 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.490832 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.491115 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.560484 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:24:06.618784 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622419 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622526 1440600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:06.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.990383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:06.990433 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:07.490157 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:07.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.990270 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.990599 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.490115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:08.990466 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:09.490075 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.490168 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.490514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:09.609944 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:24:09.674734 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674775 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674856 1440600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:09.678207 1440600 out.go:179] * Enabled addons: 
	I1222 00:24:09.681672 1440600 addons.go:530] duration metric: took 1m49.680036347s for enable addons: enabled=[]
	I1222 00:24:09.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.490125 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.990344 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.990411 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.990682 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:10.990727 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:11.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.490672 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.491056 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:11.990903 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.990982 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.991278 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.990005 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.990116 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:13.489991 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.490102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.490441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:13.490498 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:13.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.990306 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.490370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.989952 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.990029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:15.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.495140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1222 00:24:15.495205 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:15.990139 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.990548 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.490265 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.490341 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.490685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.990466 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.990810 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.490605 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.491024 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.990888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.991232 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:17.991290 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:18.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.490152 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.990267 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.990595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:20.490013 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:20.490529 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:20.990365 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.990452 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.990874 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.490647 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.490974 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.990817 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.990890 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.991258 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.489990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.990001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.990291 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:22.990334 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:23.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:23.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.990618 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.490263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.490567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.990427 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:24.990485 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:25.490302 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.490387 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.490733 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:25.990623 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.990702 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.990981 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.490787 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.989979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.990394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:27.490103 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.490443 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:27.490482 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:27.989960 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.990034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.490622 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.990631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:29.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:29.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:29.990018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.990122 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.490142 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.990537 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.990938 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:31.490581 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.490653 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.490983 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:31.491033 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:31.990778 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.990859 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.991138 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.489907 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.489978 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.490318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.490158 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.990432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:33.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:34.490195 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.490327 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.490668 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:34.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.990257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.990639 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.490386 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.490458 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.490812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.990390 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.990796 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:35.990852 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:36.490573 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.490651 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.490929 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:36.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.991171 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.489889 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.489967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.990114 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.990447 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:38.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.490429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:38.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:38.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.490269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.490588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.990227 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.990305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.990674 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:40.490443 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.490519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.490858 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:40.490915 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:40.990684 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.990753 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.490794 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.491216 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.490310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.990458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:42.990518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:43.489986 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:43.990126 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.990202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.990538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.490436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.990151 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.990227 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.990551 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:44.990601 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:45.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.490261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.490538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:45.990641 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.990724 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.991072 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.491707 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.491786 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.492142 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.989879 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.989948 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.990262 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:47.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.490372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:47.490436 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:47.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.990414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.490104 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.490506 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:49.490140 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.490223 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.490576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:49.490637 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:49.990206 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.990555 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.990242 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.990326 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:51.990489 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:52.490183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:52.990255 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.990324 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.990635 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.489964 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.490363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.990423 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:54.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.489996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.490285 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:54.490333 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:54.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.490011 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.490421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.990389 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:56.490486 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.490557 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.490869 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:56.490916 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:56.990725 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.990802 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.991133 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.490974 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.491290 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.490136 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.990263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.990550 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:58.990605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:59.490266 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.490351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.490696 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:59.990527 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.990604 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.990950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.490941 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.491023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.491350 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.990417 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.990491 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.990807 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:00.990855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:01.490637 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.490718 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.491102 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:01.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.991226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.490011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.990107 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.990182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.990528 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:03.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:03.490666 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:03.990323 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.990398 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.990774 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.490099 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.490181 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.990232 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.990304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.990399 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.990480 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.990832 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:05.990887 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:06.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.491014 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:06.990839 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.990944 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.991373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.490104 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.990134 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.990205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.990514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:08.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.490044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:08.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:08.990161 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.990242 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.990563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.490287 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.490623 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:10.490169 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:10.490691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:10.990484 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.990556 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.990880 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.490691 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.490770 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.491148 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.490147 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:12.990662 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:13.490189 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.490632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:13.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.990530 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.490218 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.490648 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.990225 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.990310 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.990701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:14.990757 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:15.490513 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.490584 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.490919 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:15.990746 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.991183 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.489918 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.490332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.990305 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:17.489916 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.489993 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:17.490411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:17.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.990067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:19.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:19.490499 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:19.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.990384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.490152 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.990524 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.990901 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:21.490562 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.490638 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.490935 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:21.490983 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:21.990766 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.991203 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.490028 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.989946 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.490115 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.490193 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.990250 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.990328 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:23.990712 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:24.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.490259 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:24.990067 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.990519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.490112 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.490189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.490544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.990337 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.990408 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.990679 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:26.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.490493 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:26.490549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:26.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.990450 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.490004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.490303 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.489996 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.490409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.989926 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.990293 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:28.990335 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:29.489955 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.490419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.490182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.990514 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.990968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:30.991038 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:31.490801 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.490876 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.491238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:31.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.990319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.490200 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.490583 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.990457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:33.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:33.490407 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:33.989968 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.990049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.990398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.490108 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.490195 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.990023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.990366 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.490049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.490369 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.990125 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.990203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.990545 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:35.990599 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:36.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.490569 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:36.990016 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.490182 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.490252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.990266 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.990607 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:37.990658 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:38.490325 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.490406 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.490727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:38.990379 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.990457 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.990798 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.490205 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.990463 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:40.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.490394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:40.490452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:40.990349 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.990727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.490471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.990033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.990356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.489960 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.490300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.990099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:42.990502 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:43.490037 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:43.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.990473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:44.990528 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:45.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.490577 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:45.990696 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.990768 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.991105 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.490888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.490961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.491348 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.989888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.989967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.990307 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:47.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.490035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:47.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.490130 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.490210 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.990197 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.990271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.990616 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:49.490301 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.490378 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.490708 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:49.490767 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:49.990515 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.990590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.990888 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.490679 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.490750 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.491091 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.990830 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.990903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.991524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.490235 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.490311 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.490637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.990347 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:51.990776 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:52.490531 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.490608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.490905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:52.990690 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.990761 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.490818 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.490896 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.491226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.990031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.990354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:54.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.490015 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:54.490441 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:54.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.490031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.490422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.990385 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.990459 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.990735 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:56.490568 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.490961 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:56.491006 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:56.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.991170 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.489861 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.489931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.490260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.989963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.990042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.990407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.490137 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.490214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.990306 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.990624 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:58.990667 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:59.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:59.990163 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.990236 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.490289 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.490597 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.990658 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.990731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.991092 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:00.991152 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:01.490884 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.490964 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.491316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:01.990041 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.990455 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.490830 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.490906 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.990098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:03.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:03.490390 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:03.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.990467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.490255 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.490608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.990184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:05.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.490051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:05.490456 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:05.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.990523 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.990849 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.490609 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.490954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.990889 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.991238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.490037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.990334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:07.990374 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:08.490043 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.490140 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:08.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.990041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.490498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.990254 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.990339 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.990808 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:09.990886 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:10.490650 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.490731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.491042 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:10.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.991173 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.489924 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.490252 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:12.490161 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.490230 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:12.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:12.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.990009 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.990308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.490144 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.990441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:14.990497 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:15.490170 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.490244 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.490586 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:15.990615 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.990697 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.991007 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.490796 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.491907 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.990669 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.990740 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:16.991078 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:17.490833 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.490913 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.491260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:17.989966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.489932 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.990433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:19.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.490224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:19.490656 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:19.990226 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.990300 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.490362 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.990101 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.990509 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.490185 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.490264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.490595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.989954 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:21.990455 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:22.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.490385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:22.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.990003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.490027 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.490129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.990190 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.990277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.990691 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:23.990747 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:24.490514 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.490583 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.490927 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:24.990720 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.990794 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.490979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.491379 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.990094 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.990175 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.990521 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:26.490373 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.490449 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.490797 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:26.490855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:26.990580 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.990656 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.991034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.490724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.490790 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.491046 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.991259 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:28.490911 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.490985 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.491318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:28.491372 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:28.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.990007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.990342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.490444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.990413 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.990487 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.990822 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:30.990881 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:31.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.490709 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.491051 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:31.990823 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.991165 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.490952 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.491029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.491373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.990035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.990378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:33.490362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:33.989947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.490197 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.490500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.989977 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.990263 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:35.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.490026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:35.490389 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:35.990283 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.990360 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.990662 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.490280 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.490578 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.990095 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:37.490145 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.490220 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.490554 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:37.490609 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:37.990015 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.990389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.490110 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.490185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.990118 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.490226 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.990199 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:39.990715 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:40.490392 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.490536 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.490920 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:40.990760 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.990841 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.991131 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.490923 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.490995 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.491302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.990033 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.990472 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:42.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.490221 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.490485 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:42.490527 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:42.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.990011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.990102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:44.990519 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:45.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.490260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:45.990602 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.991037 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.490835 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.490909 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.989941 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.990385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:47.489947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.490374 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:47.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.490143 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.990580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:49.490307 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.490383 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.490720 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:49.490772 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:49.990521 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.990599 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.990879 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.491079 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.990724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.991088 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:51.490793 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.491153 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:51.491198 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:51.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.990118 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.990246 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.990561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.490415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:53.990481 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:54.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.490008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.490340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:54.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.490553 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.990361 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.990433 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:55.990770 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:56.490522 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.490596 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.490941 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:56.990624 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.990700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.991017 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.490692 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.490956 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.990832 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.990908 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.991282 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:57.991347 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:58.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:58.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.990284 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.490073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.490464 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.990275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:00.490281 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.490364 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.490677 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:00.490726 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:00.990700 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.990777 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.490903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.491267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.990417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.490437 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.990175 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:02.990670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:03.490193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.490631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.990322 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.990405 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.490523 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.490601 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.490958 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.990688 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.990756 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.991031 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.991073 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:05.490786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.491193 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.990885 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.990960 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.991336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.490367 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.990048 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.990148 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.490226 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.490304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.490653 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:07.490707 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:07.990247 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.990320 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.990637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.490406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.990481 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.490010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:09.990473 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:10.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.490228 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.490601 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.990609 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.990681 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.990963 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.490826 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.490912 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.491261 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.989991 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.990068 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:11.990514 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:12.489967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.490323 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.990096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.990095 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.990492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.990549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:14.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.990144 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.990224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.990592 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.490277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.490570 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.990543 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.990628 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.991069 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.991135 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.490872 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.490956 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.491310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.490123 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.490206 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.490561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.990295 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.990377 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.990730 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.490522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.490787 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:18.490828 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.990605 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.991041 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.490876 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.490953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.491342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.990447 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.990519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.990864 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:20.990919 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:21.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.490700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.990741 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.990818 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.991152 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.490904 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.490981 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.491320 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.989871 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.989939 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.990221 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.989977 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.990381 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.490041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.989999 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.990314 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:25.990363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.490096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.490426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.490090 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.490172 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.490501 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:27.990490 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:28.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.490121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.990112 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.990185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.490221 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.490292 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.490664 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.990003 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.490150 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.490225 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.490592 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:30.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.990567 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.990902 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.490527 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.490606 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.490937 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.990701 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.990772 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.991052 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.490780 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.491194 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.491251 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:32.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.490333 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.989999 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.990090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.990434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.490164 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.490237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.490525 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:34.990463 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.990424 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.990500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.990847 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.490629 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.490699 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.491002 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.990786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.990862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.991205 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:36.991272 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.990047 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.990142 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.489979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.990127 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.990498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.490565 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:39.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.490020 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.990313 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.990393 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.990738 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.490590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.490933 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.490989 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:41.990751 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.991149 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.489855 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.489927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.490220 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.989969 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.990392 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.490355 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.989943 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:43.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.490068 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.490167 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.490098 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.490445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.990499 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.990579 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.990932 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:45.990988 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.490765 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.490851 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.491199 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.989903 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.989975 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.990267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.490040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.490410 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.990272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.990633 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.490557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.490605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.490121 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.490602 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.990180 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.990573 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.490264 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.490334 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.490701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.490762 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:50.990600 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.990676 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.991023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.490888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.491166 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.989883 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.989958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.990326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.490029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.990008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.990311 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:52.990362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.490023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.989949 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.990375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.490788 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.490862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.491123 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.990890 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.990969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.991274 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:54.991322 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.490034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.490395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.990532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.490594 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:57.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.490205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:57.490521 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:57.990207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.990685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.490950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.990719 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.991070 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.490926 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.491272 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:59.491325 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:59.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.990477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.491169 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.491258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.491580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.990547 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.990624 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.991006 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.490798 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.490875 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.491244 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:01.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:02.490072 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.490171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:02.990211 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.990296 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.990636 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:03.990451 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:04.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:04.989923 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.490111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:05.990476 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:06.490127 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.490204 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:06.989958 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.990380 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.490431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.990121 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.990482 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:07.990525 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:08.489974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.490425 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.990156 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.990576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.490271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.490542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.490071 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.490532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:10.490597 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:10.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.990397 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.990739 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.490506 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.490943 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.990748 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.990830 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.991167 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.489923 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.490225 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.990044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.990403 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:12.990470 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:13.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:13.990123 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.990198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.490054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.990189 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.990627 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:14.990691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:15.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.490254 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.490558 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:15.990404 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.990479 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.990821 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.491027 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.990834 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.990930 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.991327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:16.991378 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:17.489874 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.489955 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.490319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:17.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.490359 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:19.489997 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.490461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:19.490518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:19.990172 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.990243 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.990549 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:20.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:20.490384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.990305 1440600 node_ready.go:38] duration metric: took 6m0.000552396s for node "functional-973657" to be "Ready" ...
	I1222 00:28:20.993510 1440600 out.go:203] 
	W1222 00:28:20.996431 1440600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:28:20.996456 1440600 out.go:285] * 
	W1222 00:28:20.998594 1440600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:28:21.002257 1440600 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.298792815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.298864643Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.298978129Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299056694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299121474Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299191005Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299269176Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299347667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299422236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299509260Z" level=info msg="Connect containerd service"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.299903955Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.300613138Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.311856100Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.312075679Z" level=info msg="Start recovering state"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.312037500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.312330451Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349120777Z" level=info msg="Start event monitor"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349322698Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349392640Z" level=info msg="Start streaming server"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349460546Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349519738Z" level=info msg="runtime interface starting up..."
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349574885Z" level=info msg="starting plugins..."
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.349639797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:22:18 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 22 00:22:18 functional-973657 containerd[5251]: time="2025-12-22T00:22:18.351883541Z" level=info msg="containerd successfully booted in 0.079188s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:28:25.140386    8634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:25.141267    8634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:25.143020    8634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:25.143396    8634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:25.144913    8634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:28:25 up 1 day,  7:10,  0 user,  load average: 0.19, 0.29, 0.87
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:28:21 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:22 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 22 00:28:22 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:22 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:22 functional-973657 kubelet[8411]: E1222 00:28:22.313936    8411 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:22 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:22 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:22 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 22 00:28:22 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:22 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:23 functional-973657 kubelet[8509]: E1222 00:28:23.051649    8509 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:23 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:23 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:23 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 22 00:28:23 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:23 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:23 functional-973657 kubelet[8527]: E1222 00:28:23.751830    8527 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:23 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:23 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:24 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 22 00:28:24 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:24 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:24 functional-973657 kubelet[8550]: E1222 00:28:24.557192    8550 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:24 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:24 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (346.692202ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 kubectl -- --context functional-973657 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 kubectl -- --context functional-973657 get pods: exit status 1 (111.405925ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-973657 kubectl -- --context functional-973657 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
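The `NetworkSettings.Ports` map in the inspect output above is what minikube later reads with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to find the SSH forward (38390). A minimal sketch of the same lookup in Python, using a trimmed excerpt of the JSON above (the `host_port` helper name is illustrative, not part of minikube):

```python
import json

# Trimmed excerpt of the `docker inspect` output shown above; only the
# fields the port lookup touches are kept.
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "38390"}],
        "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38393"}]
      }
    }
  }
]
""")

def host_port(inspect_data, container_port):
    """Return the first host port bound to container_port, or None.

    Mirrors the Go template
    {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}:
    index into Ports by key, take binding 0, read HostPort.
    """
    bindings = inspect_data[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return bindings[0]["HostPort"] if bindings else None

print(host_port(inspect_output, "22/tcp"))    # SSH forward used by provisioning
print(host_port(inspect_output, "8441/tcp"))  # forwarded API-server port
```

Note the asymmetry in the report: `HostConfig.PortBindings` requests `"HostPort": ""` (let Docker pick an ephemeral port), while `NetworkSettings.Ports` records the ports actually allocated.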
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (308.374749ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-722318 image ls --format short --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format yaml --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh     │ functional-722318 ssh pgrep buildkitd                                                                                                                 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ image   │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr                                                │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format json --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format table --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls                                                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ delete  │ -p functional-722318                                                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ start   │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ start   │ -p functional-973657 --alsologtostderr -v=8                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:22 UTC │                     │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:latest                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add minikube-local-cache-test:functional-973657                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache delete minikube-local-cache-test:functional-973657                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl images                                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ cache   │ functional-973657 cache reload                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ kubectl │ functional-973657 kubectl -- --context functional-973657 get pods                                                                                     │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:22:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:22:15.745982 1440600 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:22:15.746211 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746249 1440600 out.go:374] Setting ErrFile to fd 2...
	I1222 00:22:15.746270 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746555 1440600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:22:15.747001 1440600 out.go:368] Setting JSON to false
	I1222 00:22:15.747938 1440600 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111889,"bootTime":1766251047,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:22:15.748043 1440600 start.go:143] virtualization:  
	I1222 00:22:15.753569 1440600 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:22:15.756598 1440600 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:22:15.756741 1440600 notify.go:221] Checking for updates...
	I1222 00:22:15.762722 1440600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:22:15.765671 1440600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:15.768657 1440600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:22:15.771623 1440600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:22:15.774619 1440600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:22:15.777830 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:15.777978 1440600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:22:15.812917 1440600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:22:15.813051 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.874179 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.864674601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.874289 1440600 docker.go:319] overlay module found
	I1222 00:22:15.877302 1440600 out.go:179] * Using the docker driver based on existing profile
	I1222 00:22:15.880104 1440600 start.go:309] selected driver: docker
	I1222 00:22:15.880124 1440600 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableC
oreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.880226 1440600 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:22:15.880331 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.936346 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.927222796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.936748 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:15.936818 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:15.936877 1440600 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.939915 1440600 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:22:15.942690 1440600 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:22:15.945666 1440600 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:22:15.948535 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:15.948600 1440600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:22:15.948615 1440600 cache.go:65] Caching tarball of preloaded images
	I1222 00:22:15.948645 1440600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:22:15.948702 1440600 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:22:15.948713 1440600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:22:15.948830 1440600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:22:15.969249 1440600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:22:15.969274 1440600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:22:15.969294 1440600 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:22:15.969326 1440600 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:22:15.969396 1440600 start.go:364] duration metric: took 41.633µs to acquireMachinesLock for "functional-973657"
	I1222 00:22:15.969420 1440600 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:22:15.969432 1440600 fix.go:54] fixHost starting: 
	I1222 00:22:15.969697 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:15.991071 1440600 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:22:15.991104 1440600 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:22:15.994289 1440600 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:22:15.994325 1440600 machine.go:94] provisionDockerMachine start ...
	I1222 00:22:15.994407 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.016696 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.017052 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.017069 1440600 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:22:16.150117 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.150145 1440600 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:22:16.150214 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.171110 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.171503 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.171525 1440600 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:22:16.320804 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.320911 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.341102 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.341468 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.341492 1440600 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:22:16.474666 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:22:16.474761 1440600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:22:16.474804 1440600 ubuntu.go:190] setting up certificates
	I1222 00:22:16.474823 1440600 provision.go:84] configureAuth start
	I1222 00:22:16.474894 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:16.493393 1440600 provision.go:143] copyHostCerts
	I1222 00:22:16.493439 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493474 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:22:16.493495 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493578 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:22:16.493680 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493704 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:22:16.493715 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493744 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:22:16.493808 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493831 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:22:16.493837 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493863 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:22:16.493929 1440600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:22:16.688332 1440600 provision.go:177] copyRemoteCerts
	I1222 00:22:16.688423 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:22:16.688474 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.708412 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.807036 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:22:16.807104 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:22:16.826203 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:22:16.826269 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:22:16.844818 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:22:16.844882 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:22:16.862814 1440600 provision.go:87] duration metric: took 387.965654ms to configureAuth
	I1222 00:22:16.862846 1440600 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:22:16.863040 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:16.863055 1440600 machine.go:97] duration metric: took 868.721817ms to provisionDockerMachine
	I1222 00:22:16.863063 1440600 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:22:16.863075 1440600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:22:16.863140 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:22:16.863187 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.881215 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.978224 1440600 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:22:16.981674 1440600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:22:16.981697 1440600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:22:16.981701 1440600 command_runner.go:130] > VERSION_ID="12"
	I1222 00:22:16.981706 1440600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:22:16.981711 1440600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:22:16.981715 1440600 command_runner.go:130] > ID=debian
	I1222 00:22:16.981720 1440600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:22:16.981726 1440600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:22:16.981732 1440600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:22:16.981781 1440600 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:22:16.981805 1440600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:22:16.981817 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:22:16.981874 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:22:16.981966 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:22:16.981976 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /etc/ssl/certs/13968642.pem
	I1222 00:22:16.982050 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:22:16.982058 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> /etc/test/nested/copy/1396864/hosts
	I1222 00:22:16.982135 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:22:16.991499 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:17.014617 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:22:17.034289 1440600 start.go:296] duration metric: took 171.210875ms for postStartSetup
	I1222 00:22:17.034373 1440600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:22:17.034421 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.055784 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.151461 1440600 command_runner.go:130] > 11%
	I1222 00:22:17.151551 1440600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:22:17.156056 1440600 command_runner.go:130] > 174G
	I1222 00:22:17.156550 1440600 fix.go:56] duration metric: took 1.187112425s for fixHost
	I1222 00:22:17.156572 1440600 start.go:83] releasing machines lock for "functional-973657", held for 1.187162091s
	I1222 00:22:17.156642 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:17.174525 1440600 ssh_runner.go:195] Run: cat /version.json
	I1222 00:22:17.174589 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.174652 1440600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:22:17.174714 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.196471 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.199230 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.379176 1440600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:22:17.379235 1440600 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:22:17.379354 1440600 ssh_runner.go:195] Run: systemctl --version
	I1222 00:22:17.385410 1440600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:22:17.385465 1440600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:22:17.385880 1440600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:22:17.390276 1440600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:22:17.390418 1440600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:22:17.390488 1440600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:22:17.398542 1440600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:22:17.398570 1440600 start.go:496] detecting cgroup driver to use...
	I1222 00:22:17.398621 1440600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:22:17.398692 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:22:17.414048 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:22:17.427185 1440600 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:22:17.427253 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:22:17.442685 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:22:17.455696 1440600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:22:17.577927 1440600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:22:17.693641 1440600 docker.go:234] disabling docker service ...
	I1222 00:22:17.693740 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:22:17.714854 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:22:17.729523 1440600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:22:17.852439 1440600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:22:17.963077 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:22:17.977041 1440600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:22:17.991276 1440600 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 00:22:17.992369 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:22:18.003034 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:22:18.019363 1440600 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:22:18.019441 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:22:18.030259 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.041222 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:22:18.051429 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.060629 1440600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:22:18.069455 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:22:18.079294 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:22:18.088607 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:22:18.097955 1440600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:22:18.105014 1440600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:22:18.106002 1440600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:22:18.114147 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.224816 1440600 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:22:18.353040 1440600 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:22:18.353118 1440600 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:22:18.356934 1440600 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1222 00:22:18.357009 1440600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:22:18.357030 1440600 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1222 00:22:18.357053 1440600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:18.357086 1440600 command_runner.go:130] > Access: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357111 1440600 command_runner.go:130] > Modify: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357132 1440600 command_runner.go:130] > Change: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357178 1440600 command_runner.go:130] >  Birth: -
	I1222 00:22:18.357507 1440600 start.go:564] Will wait 60s for crictl version
	I1222 00:22:18.357612 1440600 ssh_runner.go:195] Run: which crictl
	I1222 00:22:18.361021 1440600 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:22:18.361396 1440600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:22:18.384093 1440600 command_runner.go:130] > Version:  0.1.0
	I1222 00:22:18.384169 1440600 command_runner.go:130] > RuntimeName:  containerd
	I1222 00:22:18.384205 1440600 command_runner.go:130] > RuntimeVersion:  v2.2.1
	I1222 00:22:18.384240 1440600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:22:18.386573 1440600 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:22:18.386687 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.407693 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.410154 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.429567 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.437868 1440600 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:22:18.440703 1440600 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:22:18.457963 1440600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:22:18.462339 1440600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:22:18.462457 1440600 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:22:18.462560 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:18.462639 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.493006 1440600 command_runner.go:130] > {
	I1222 00:22:18.493026 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.493030 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493040 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.493045 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493051 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.493055 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493059 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493072 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.493076 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493081 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.493085 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493089 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493092 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493095 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493102 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.493106 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493112 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.493116 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493120 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493128 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.493135 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493139 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.493143 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493147 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493150 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493153 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493162 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.493166 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493171 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.493178 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493186 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493194 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.493197 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493201 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.493206 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.493210 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493213 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493216 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493223 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.493227 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493231 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.493235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493238 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493246 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.493249 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493253 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.493258 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493261 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493264 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493268 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493271 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493275 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493278 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493285 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.493289 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493294 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.493297 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493300 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493308 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.493311 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493316 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.493319 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493335 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493338 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493342 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493346 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493349 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493352 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493359 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.493362 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493368 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.493371 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493374 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493383 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.493386 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493389 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.493393 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493396 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493399 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493403 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493407 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493410 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493413 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493420 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.493423 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493429 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.493432 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493435 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493443 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.493446 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493450 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.493454 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493457 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493460 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493464 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493475 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.493479 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493484 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.493487 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493491 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493498 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.493501 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493505 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.493509 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493512 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493516 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493519 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493523 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493526 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493529 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493536 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.493539 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493543 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.493547 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493550 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493557 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.493560 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493564 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.493568 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493571 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.493575 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493579 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493582 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.493585 1440600 command_runner.go:130] >     }
	I1222 00:22:18.493588 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.493591 1440600 command_runner.go:130] > }
	I1222 00:22:18.493746 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.493754 1440600 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:22:18.493814 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.517780 1440600 command_runner.go:130] > {
	I1222 00:22:18.517799 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.517803 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517813 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.517818 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517824 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.517827 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517831 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517839 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.517843 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517856 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.517861 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517865 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517867 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517870 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517878 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.517882 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517887 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.517890 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517894 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517902 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.517906 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517910 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.517913 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517917 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517920 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517923 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517930 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.517934 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517939 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.517942 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517947 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517955 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.517958 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517962 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.517966 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.517970 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517974 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517977 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517983 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.517987 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517992 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.517995 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518002 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518010 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.518013 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518017 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.518022 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518026 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518029 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518033 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518037 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518041 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518043 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518050 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.518054 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518059 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.518062 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518066 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518073 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.518098 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518103 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.518106 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518115 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518118 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518122 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518125 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518128 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518131 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518142 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.518146 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518151 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.518155 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518158 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518166 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.518170 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518178 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.518182 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518185 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518188 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518192 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518195 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518198 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518202 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518209 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.518212 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518217 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.518220 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518224 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518231 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.518235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518239 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.518242 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518246 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518249 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518253 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518260 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.518264 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518269 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.518273 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518277 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518285 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.518288 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518292 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.518295 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518299 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518302 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518306 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518310 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518318 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518322 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518328 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.518332 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518337 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.518340 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518344 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518352 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.518355 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518358 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.518362 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518366 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.518371 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518375 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518379 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.518388 1440600 command_runner.go:130] >     }
	I1222 00:22:18.518391 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.518397 1440600 command_runner.go:130] > }
	I1222 00:22:18.524524 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.524599 1440600 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:22:18.524620 1440600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:22:18.524759 1440600 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:22:18.524857 1440600 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:22:18.549454 1440600 command_runner.go:130] > {
	I1222 00:22:18.549479 1440600 command_runner.go:130] >   "cniconfig": {
	I1222 00:22:18.549486 1440600 command_runner.go:130] >     "Networks": [
	I1222 00:22:18.549489 1440600 command_runner.go:130] >       {
	I1222 00:22:18.549495 1440600 command_runner.go:130] >         "Config": {
	I1222 00:22:18.549500 1440600 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1222 00:22:18.549519 1440600 command_runner.go:130] >           "Name": "cni-loopback",
	I1222 00:22:18.549527 1440600 command_runner.go:130] >           "Plugins": [
	I1222 00:22:18.549530 1440600 command_runner.go:130] >             {
	I1222 00:22:18.549541 1440600 command_runner.go:130] >               "Network": {
	I1222 00:22:18.549546 1440600 command_runner.go:130] >                 "ipam": {},
	I1222 00:22:18.549551 1440600 command_runner.go:130] >                 "type": "loopback"
	I1222 00:22:18.549560 1440600 command_runner.go:130] >               },
	I1222 00:22:18.549566 1440600 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1222 00:22:18.549570 1440600 command_runner.go:130] >             }
	I1222 00:22:18.549579 1440600 command_runner.go:130] >           ],
	I1222 00:22:18.549590 1440600 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1222 00:22:18.549604 1440600 command_runner.go:130] >         },
	I1222 00:22:18.549612 1440600 command_runner.go:130] >         "IFName": "lo"
	I1222 00:22:18.549615 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549619 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549626 1440600 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1222 00:22:18.549636 1440600 command_runner.go:130] >     "PluginDirs": [
	I1222 00:22:18.549640 1440600 command_runner.go:130] >       "/opt/cni/bin"
	I1222 00:22:18.549643 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549648 1440600 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1222 00:22:18.549656 1440600 command_runner.go:130] >     "Prefix": "eth"
	I1222 00:22:18.549667 1440600 command_runner.go:130] >   },
	I1222 00:22:18.549674 1440600 command_runner.go:130] >   "config": {
	I1222 00:22:18.549678 1440600 command_runner.go:130] >     "cdiSpecDirs": [
	I1222 00:22:18.549682 1440600 command_runner.go:130] >       "/etc/cdi",
	I1222 00:22:18.549687 1440600 command_runner.go:130] >       "/var/run/cdi"
	I1222 00:22:18.549691 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549695 1440600 command_runner.go:130] >     "cni": {
	I1222 00:22:18.549698 1440600 command_runner.go:130] >       "binDir": "",
	I1222 00:22:18.549702 1440600 command_runner.go:130] >       "binDirs": [
	I1222 00:22:18.549706 1440600 command_runner.go:130] >         "/opt/cni/bin"
	I1222 00:22:18.549709 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.549713 1440600 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1222 00:22:18.549717 1440600 command_runner.go:130] >       "confTemplate": "",
	I1222 00:22:18.549720 1440600 command_runner.go:130] >       "ipPref": "",
	I1222 00:22:18.549728 1440600 command_runner.go:130] >       "maxConfNum": 1,
	I1222 00:22:18.549732 1440600 command_runner.go:130] >       "setupSerially": false,
	I1222 00:22:18.549739 1440600 command_runner.go:130] >       "useInternalLoopback": false
	I1222 00:22:18.549748 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549754 1440600 command_runner.go:130] >     "containerd": {
	I1222 00:22:18.549759 1440600 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1222 00:22:18.549768 1440600 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1222 00:22:18.549773 1440600 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1222 00:22:18.549777 1440600 command_runner.go:130] >       "runtimes": {
	I1222 00:22:18.549781 1440600 command_runner.go:130] >         "runc": {
	I1222 00:22:18.549786 1440600 command_runner.go:130] >           "ContainerAnnotations": null,
	I1222 00:22:18.549795 1440600 command_runner.go:130] >           "PodAnnotations": null,
	I1222 00:22:18.549799 1440600 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1222 00:22:18.549803 1440600 command_runner.go:130] >           "cgroupWritable": false,
	I1222 00:22:18.549808 1440600 command_runner.go:130] >           "cniConfDir": "",
	I1222 00:22:18.549816 1440600 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1222 00:22:18.549825 1440600 command_runner.go:130] >           "io_type": "",
	I1222 00:22:18.549829 1440600 command_runner.go:130] >           "options": {
	I1222 00:22:18.549834 1440600 command_runner.go:130] >             "BinaryName": "",
	I1222 00:22:18.549841 1440600 command_runner.go:130] >             "CriuImagePath": "",
	I1222 00:22:18.549847 1440600 command_runner.go:130] >             "CriuWorkPath": "",
	I1222 00:22:18.549851 1440600 command_runner.go:130] >             "IoGid": 0,
	I1222 00:22:18.549860 1440600 command_runner.go:130] >             "IoUid": 0,
	I1222 00:22:18.549864 1440600 command_runner.go:130] >             "NoNewKeyring": false,
	I1222 00:22:18.549869 1440600 command_runner.go:130] >             "Root": "",
	I1222 00:22:18.549874 1440600 command_runner.go:130] >             "ShimCgroup": "",
	I1222 00:22:18.549883 1440600 command_runner.go:130] >             "SystemdCgroup": false
	I1222 00:22:18.549890 1440600 command_runner.go:130] >           },
	I1222 00:22:18.549896 1440600 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1222 00:22:18.549907 1440600 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1222 00:22:18.549911 1440600 command_runner.go:130] >           "runtimePath": "",
	I1222 00:22:18.549916 1440600 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1222 00:22:18.549920 1440600 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1222 00:22:18.549924 1440600 command_runner.go:130] >           "snapshotter": ""
	I1222 00:22:18.549928 1440600 command_runner.go:130] >         }
	I1222 00:22:18.549931 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549934 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549944 1440600 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1222 00:22:18.549953 1440600 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1222 00:22:18.549961 1440600 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1222 00:22:18.549965 1440600 command_runner.go:130] >     "disableApparmor": false,
	I1222 00:22:18.549970 1440600 command_runner.go:130] >     "disableHugetlbController": true,
	I1222 00:22:18.549978 1440600 command_runner.go:130] >     "disableProcMount": false,
	I1222 00:22:18.549983 1440600 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1222 00:22:18.549987 1440600 command_runner.go:130] >     "enableCDI": true,
	I1222 00:22:18.549991 1440600 command_runner.go:130] >     "enableSelinux": false,
	I1222 00:22:18.549996 1440600 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1222 00:22:18.550004 1440600 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1222 00:22:18.550010 1440600 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1222 00:22:18.550015 1440600 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1222 00:22:18.550019 1440600 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1222 00:22:18.550024 1440600 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1222 00:22:18.550035 1440600 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1222 00:22:18.550046 1440600 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550051 1440600 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1222 00:22:18.550059 1440600 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550068 1440600 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1222 00:22:18.550072 1440600 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1222 00:22:18.550165 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550176 1440600 command_runner.go:130] >   "features": {
	I1222 00:22:18.550180 1440600 command_runner.go:130] >     "supplemental_groups_policy": true
	I1222 00:22:18.550184 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550188 1440600 command_runner.go:130] >   "golang": "go1.24.11",
	I1222 00:22:18.550201 1440600 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550222 1440600 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550231 1440600 command_runner.go:130] >   "runtimeHandlers": [
	I1222 00:22:18.550234 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550238 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550243 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550253 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550257 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550260 1440600 command_runner.go:130] >     },
	I1222 00:22:18.550264 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550268 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550272 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550277 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550282 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550286 1440600 command_runner.go:130] >       "name": "runc"
	I1222 00:22:18.550290 1440600 command_runner.go:130] >     }
	I1222 00:22:18.550293 1440600 command_runner.go:130] >   ],
	I1222 00:22:18.550296 1440600 command_runner.go:130] >   "status": {
	I1222 00:22:18.550302 1440600 command_runner.go:130] >     "conditions": [
	I1222 00:22:18.550305 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550315 1440600 command_runner.go:130] >         "message": "",
	I1222 00:22:18.550319 1440600 command_runner.go:130] >         "reason": "",
	I1222 00:22:18.550327 1440600 command_runner.go:130] >         "status": true,
	I1222 00:22:18.550337 1440600 command_runner.go:130] >         "type": "RuntimeReady"
	I1222 00:22:18.550341 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550344 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550352 1440600 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1222 00:22:18.550360 1440600 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1222 00:22:18.550365 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550369 1440600 command_runner.go:130] >         "type": "NetworkReady"
	I1222 00:22:18.550373 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550375 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550400 1440600 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1222 00:22:18.550411 1440600 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1222 00:22:18.550417 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550423 1440600 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1222 00:22:18.550427 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550430 1440600 command_runner.go:130] >     ]
	I1222 00:22:18.550433 1440600 command_runner.go:130] >   }
	I1222 00:22:18.550437 1440600 command_runner.go:130] > }
	I1222 00:22:18.553215 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:18.553243 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:18.553264 1440600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:22:18.553287 1440600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:22:18.553412 1440600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:22:18.553487 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:22:18.562348 1440600 command_runner.go:130] > kubeadm
	I1222 00:22:18.562392 1440600 command_runner.go:130] > kubectl
	I1222 00:22:18.562397 1440600 command_runner.go:130] > kubelet
	I1222 00:22:18.563648 1440600 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:22:18.563729 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:22:18.571505 1440600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:22:18.584676 1440600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:22:18.597236 1440600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 00:22:18.610841 1440600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:22:18.614244 1440600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 00:22:18.614541 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.726610 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:19.239399 1440600 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:22:19.239420 1440600 certs.go:195] generating shared ca certs ...
	I1222 00:22:19.239437 1440600 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.239601 1440600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:22:19.239659 1440600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:22:19.239667 1440600 certs.go:257] generating profile certs ...
	I1222 00:22:19.239794 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:22:19.239853 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:22:19.239904 1440600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:22:19.239913 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:22:19.239940 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:22:19.239954 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:22:19.239964 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:22:19.239974 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:22:19.239986 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:22:19.239996 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:22:19.240015 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:22:19.240069 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:22:19.240100 1440600 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:22:19.240108 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:22:19.240138 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:22:19.240165 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:22:19.240227 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:22:19.240279 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:19.240316 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.240338 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.240354 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem -> /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.240935 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:22:19.264800 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:22:19.285797 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:22:19.306670 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:22:19.326432 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:22:19.345177 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:22:19.365354 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:22:19.385285 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:22:19.406674 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:22:19.425094 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:22:19.443464 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:22:19.461417 1440600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:22:19.474356 1440600 ssh_runner.go:195] Run: openssl version
	I1222 00:22:19.480426 1440600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:22:19.480764 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.488508 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:22:19.496491 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500580 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500632 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500692 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.542795 1440600 command_runner.go:130] > 3ec20f2e
	I1222 00:22:19.543311 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:22:19.550778 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.558196 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:22:19.566111 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570217 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570294 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570384 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.611673 1440600 command_runner.go:130] > b5213941
	I1222 00:22:19.612225 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:22:19.620704 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.628264 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:22:19.635997 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.639846 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640210 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640329 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.681144 1440600 command_runner.go:130] > 51391683
	I1222 00:22:19.681670 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
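The `openssl x509 -hash` / `ln -fs …/<hash>.0` pairs above follow OpenSSL's trust-store convention: a directory like `/etc/ssl/certs` is searched via symlinks named after the certificate's subject-name hash, with a `.0` suffix. A sketch of that step against a throwaway self-signed cert in a temp dir (nothing touches the real trust store):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)

# Generate a throwaway self-signed CA cert (analogous to minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/demoCA.pem" 2>/dev/null

# Same command the log runs: print the 8-hex-digit subject-name hash.
hash=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")

# Create the <hash>.0 symlink OpenSSL uses for directory lookups.
ln -fs "$dir/demoCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
rm -rf "$dir"
```

The `3ec20f2e`, `b5213941`, and `51391683` values in the log are exactly such hashes, and the `sudo test -L /etc/ssl/certs/<hash>.0` lines verify the corresponding symlinks exist.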
	I1222 00:22:19.689290 1440600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693035 1440600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693063 1440600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:22:19.693070 1440600 command_runner.go:130] > Device: 259,1	Inode: 3898609     Links: 1
	I1222 00:22:19.693078 1440600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:19.693115 1440600 command_runner.go:130] > Access: 2025-12-22 00:18:12.483760857 +0000
	I1222 00:22:19.693127 1440600 command_runner.go:130] > Modify: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693132 1440600 command_runner.go:130] > Change: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693137 1440600 command_runner.go:130] >  Birth: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693272 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:22:19.733914 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.734424 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:22:19.775247 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.775751 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:22:19.816615 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.817124 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:22:19.858237 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.858742 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:22:19.899966 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.900073 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:22:19.941050 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.941558 1440600 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:19.941671 1440600 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:22:19.941755 1440600 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:22:19.969312 1440600 cri.go:96] found id: ""
	I1222 00:22:19.969401 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:22:19.976791 1440600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:22:19.976817 1440600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:22:19.976825 1440600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:22:19.977852 1440600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:22:19.977869 1440600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:22:19.977970 1440600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:22:19.987953 1440600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:22:19.988422 1440600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.988584 1440600 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "functional-973657" cluster setting kubeconfig missing "functional-973657" context setting]
	I1222 00:22:19.988906 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.989373 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.989570 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:19.990226 1440600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:22:19.990386 1440600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:22:19.990501 1440600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:22:19.990531 1440600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:22:19.990563 1440600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:22:19.990584 1440600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:22:19.990915 1440600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:22:19.999837 1440600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:22:19.999916 1440600 kubeadm.go:602] duration metric: took 22.040118ms to restartPrimaryControlPlane
	I1222 00:22:19.999943 1440600 kubeadm.go:403] duration metric: took 58.401328ms to StartCluster
	I1222 00:22:19.999973 1440600 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.000060 1440600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.000818 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.001160 1440600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 00:22:20.001573 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:20.001632 1440600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:22:20.001706 1440600 addons.go:70] Setting storage-provisioner=true in profile "functional-973657"
	I1222 00:22:20.001719 1440600 addons.go:239] Setting addon storage-provisioner=true in "functional-973657"
	I1222 00:22:20.001742 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.002272 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.005335 1440600 addons.go:70] Setting default-storageclass=true in profile "functional-973657"
	I1222 00:22:20.005371 1440600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-973657"
	I1222 00:22:20.005777 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.009418 1440600 out.go:179] * Verifying Kubernetes components...
	I1222 00:22:20.018228 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:20.049014 1440600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:22:20.054188 1440600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.054214 1440600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:22:20.054285 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.057022 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.057199 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:20.057484 1440600 addons.go:239] Setting addon default-storageclass=true in "functional-973657"
	I1222 00:22:20.057515 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.057932 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.116105 1440600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.116126 1440600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:22:20.116211 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.118476 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.150964 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.230950 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:20.246813 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.269038 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.989713 1440600 node_ready.go:35] waiting up to 6m0s for node "functional-973657" to be "Ready" ...
	I1222 00:22:20.989868 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.989910 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.989956 1440600 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990019 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.990037 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990158 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:20.990237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:20.990539 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.220129 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.281805 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.285548 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.328766 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.389895 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.389951 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.490214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.490305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.490671 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.747162 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.762982 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.851794 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.851892 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.874934 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.874990 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.990352 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.990483 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.990846 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.169304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:22.227981 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.231304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 00:22:22.232314 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.293066 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.293113 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.490400 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.490500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.490834 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.906334 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:22.975672 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.975713 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.990847 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.991200 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:22.991243 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:23.106669 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:23.165342 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.165389 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.490828 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.490919 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.491242 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:23.690784 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:23.756600 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.760540 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.489993 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.490454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.698734 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:24.769684 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:24.773516 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:24.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:24.991642 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:25.485320 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:25.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.490301 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.490614 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:25.576354 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:25.576402 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:25.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.990409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.023839 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:26.088004 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:26.088050 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:26.490597 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.491019 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.990635 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.990716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.991074 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:27.490758 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.490828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.491160 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:27.491213 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:27.990564 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.990642 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.991013 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.490658 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.490747 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.491022 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.831344 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:28.887027 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:28.890561 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:28.990850 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.990934 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.991236 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.310761 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:29.372391 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:29.372466 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:29.490719 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.490793 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.491132 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.989857 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.989931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.990237 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:29.990280 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:30.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:30.990341 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.990414 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.990750 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.490503 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.490609 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.490891 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.990771 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.991094 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:31.991143 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:32.490784 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.490857 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.491147 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:32.990889 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.990957 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.991275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.490908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.490983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.491308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:34.489922 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.490003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.490315 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:34.490363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:34.729902 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:34.785155 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:34.788865 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.788903 1440600 retry.go:84] will retry after 5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.990103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.490036 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.490475 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.603941 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:35.664634 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:35.664674 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:35.990278 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.990353 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.990620 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:36.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.490457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:36.490508 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:36.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.990309 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.990632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.490265 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.490582 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.990369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.990755 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:38.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.491023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:38.491077 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:38.990843 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.990915 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.991302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.490378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.827913 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:39.886956 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:39.887007 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.887042 1440600 retry.go:84] will retry after 5.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.990290 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.990608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.490611 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:40.990478 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:41.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.490430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:41.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.490198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:43.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:43.490468 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:43.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.490462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.689826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:44.747699 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:44.751320 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.751358 1440600 retry.go:84] will retry after 11.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.990747 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.991101 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:45.490873 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.491354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:45.491411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:45.742662 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:45.802582 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:45.802622 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:45.990273 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.990345 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.990196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.990269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.990588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.490213 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.490626 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.990032 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.990136 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.990574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:47.990636 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:48.490291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.490369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.490704 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:48.990450 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.990547 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.990893 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.490743 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.490839 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.491164 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.989920 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.990005 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.990408 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:50.490031 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.490126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:50.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:50.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.990566 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.990936 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.490631 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.490764 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.491053 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.989876 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.989962 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.990268 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.990110 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.990196 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.990515 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:52.990572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:53.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.490563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:53.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.990024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.990327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:55.234826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:55.295545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:55.295592 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.295617 1440600 retry.go:84] will retry after 23.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.490907 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.490991 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.491326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:55.491397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:55.990270 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.990351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.490418 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.490484 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.490747 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.590202 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:56.649053 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:56.649099 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:56.990592 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.990671 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.490928 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.989908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.990332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:57.990391 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:58.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:58.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.990106 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.990483 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.490276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.990371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:59.990417 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:00.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.490099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:00.990353 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.990690 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.490534 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.490615 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.990810 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.991247 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:01.991307 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.490020 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.490365 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:02.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.490160 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.490510 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.990186 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.990567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:04.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.490056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.490388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:04.490445 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:04.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.490618 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.490690 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.990715 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.990804 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.991174 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:06.490848 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.490927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.491264 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:06.491323 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:06.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.990038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.990349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.490094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.990138 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.990557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.490297 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.990354 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.990451 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.990812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:08.990863 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:09.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.490727 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.491063 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:09.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.990741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.991016 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.490839 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.490917 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.491255 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.990187 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.990542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:11.490194 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.490275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.490617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:11.490670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:11.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.990065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.990445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.490589 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.990181 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.490414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.990036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:13.990452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:14.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:14.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.990388 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.990804 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:15.990869 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:16.088253 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:16.150952 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:16.151001 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.151025 1440600 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.490464 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.490538 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.490881 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:16.990721 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.990797 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.991127 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.489899 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.489969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.490299 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.990075 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.990174 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:18.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:18.490500 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:18.654775 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:23:18.717545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:18.717590 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:18.989890 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.989961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.990331 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.490056 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.490166 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.490043 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.490401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.990442 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.990522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.990905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:20.990965 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:21.490561 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.490647 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:21.990814 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.990880 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.991151 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.489859 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.489933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.989907 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:23.489920 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.489988 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.490275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:23.490318 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:23.990022 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.990126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.990454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.990527 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:25.489940 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.490018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.490368 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:25.490426 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:25.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.990391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.490033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.490358 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.990471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:27.490073 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.490476 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:27.490523 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:27.990208 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.990284 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.161122 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:28.220514 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:28.224335 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.224393 1440600 retry.go:84] will retry after 41.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.490932 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.491336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.989984 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.990321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.490024 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.490113 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.990094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.990474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:29.990557 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:30.490047 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:30.990259 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.990329 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.990655 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.490411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.990014 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.990469 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:32.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.490375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:32.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:32.990098 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.990500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.990154 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.990566 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:34.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.490090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.490440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:34.490501 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.990397 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.490066 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.490157 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.990431 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.990505 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:36.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.490528 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.490835 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:36.490884 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:36.990603 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.990678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.990954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.490735 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.490807 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.491181 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.990929 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.991230 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.489948 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.490349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.990008 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.990436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:38.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:39.489987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.490063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.490393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:39.989944 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.990040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.990363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.990475 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.990549 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.990889 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:40.990950 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:41.490672 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.490741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.491008 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:41.990780 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.990856 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.991209 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.490871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.490954 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.491340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.990404 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:43.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.490032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.490391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:43.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:43.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:45.490042 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.490139 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.490488 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:45.490544 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:45.990486 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.990563 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.990841 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.490719 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.491036 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.990855 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.990935 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.991321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.490334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.990452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:47.990507 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:48.490178 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.490596 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:48.990176 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.990258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.990544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.490811 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.490901 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.989874 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.989946 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.990300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:50.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.490343 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:50.490397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:50.990356 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.990437 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.990752 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.490553 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.490629 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.490975 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.990784 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.990866 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.489871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.489953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.989984 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:52.990479 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:53.490129 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.490202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.490518 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:53.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.990262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.990609 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.490055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:55.490063 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.490153 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.490516 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:55.490572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:55.990452 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.990878 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.490649 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.490982 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.990754 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.990838 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.991192 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.489902 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.489983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:57.990458 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:58.490135 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.490219 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:58.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.990363 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.490474 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.490546 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.490809 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.990637 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.990713 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.991064 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:59.991122 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:00.490316 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.490400 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.490862 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:00.990668 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.990739 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.991087 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.490883 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.490958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.990026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.990325 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:02.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:02.490534 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:02.990031 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.990131 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.990497 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.490600 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.489994 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.490456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.989918 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.989996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:04.990396 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:05.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.490511 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:05.990488 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.990562 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.990914 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.490753 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.490832 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.491115 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.560484 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:24:06.618784 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622419 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622526 1440600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:06.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.990383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:06.990433 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:07.490157 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:07.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.990270 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.990599 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.490115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:08.990466 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:09.490075 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.490168 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.490514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:09.609944 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:24:09.674734 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674775 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674856 1440600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:09.678207 1440600 out.go:179] * Enabled addons: 
	I1222 00:24:09.681672 1440600 addons.go:530] duration metric: took 1m49.680036347s for enable addons: enabled=[]
	I1222 00:24:09.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.490125 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.990344 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.990411 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.990682 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:10.990727 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:11.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.490672 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.491056 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:11.990903 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.990982 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.991278 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.990005 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.990116 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:13.489991 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.490102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.490441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:13.490498 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:13.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.990306 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.490370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.989952 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.990029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:15.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.495140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1222 00:24:15.495205 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:15.990139 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.990548 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.490265 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.490341 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.490685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.990466 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.990810 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.490605 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.491024 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.990888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.991232 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:17.991290 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:18.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.490152 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.990267 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.990595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:20.490013 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:20.490529 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:20.990365 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.990452 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.990874 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.490647 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.490974 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.990817 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.990890 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.991258 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.489990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.990001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.990291 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:22.990334 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:23.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:23.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.990618 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.490263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.490567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.990427 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:24.990485 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:25.490302 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.490387 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.490733 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:25.990623 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.990702 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.990981 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.490787 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.989979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.990394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:27.490103 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.490443 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:27.490482 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:27.989960 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.990034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.490622 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.990631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:29.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:29.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:29.990018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.990122 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.490142 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.990537 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.990938 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:31.490581 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.490653 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.490983 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:31.491033 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:31.990778 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.990859 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.991138 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.489907 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.489978 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.490318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.490158 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.990432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:33.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:34.490195 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.490327 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.490668 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:34.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.990257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.990639 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.490386 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.490458 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.490812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.990390 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.990796 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:35.990852 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:36.490573 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.490651 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.490929 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:36.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.991171 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.489889 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.489967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.990114 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.990447 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:38.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.490429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:38.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:38.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.490269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.490588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.990227 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.990305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.990674 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:40.490443 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.490519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.490858 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:40.490915 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:40.990684 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.990753 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.490794 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.491216 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.490310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.990458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:42.990518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:43.489986 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:43.990126 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.990202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.990538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.490436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.990151 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.990227 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.990551 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:44.990601 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:45.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.490261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.490538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:45.990641 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.990724 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.991072 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.491707 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.491786 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.492142 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.989879 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.989948 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.990262 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:47.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.490372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:47.490436 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:47.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.990414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.490104 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.490506 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:49.490140 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.490223 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.490576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:49.490637 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:49.990206 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.990555 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.990242 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.990326 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:51.990489 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:52.490183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:52.990255 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.990324 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.990635 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.489964 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.490363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.990423 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:54.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.489996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.490285 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:54.490333 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:54.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.490011 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.490421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.990389 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:56.490486 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.490557 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.490869 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:56.490916 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:56.990725 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.990802 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.991133 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.490974 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.491290 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.490136 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.990263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.990550 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:58.990605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:59.490266 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.490351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.490696 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:59.990527 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.990604 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.990950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.490941 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.491023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.491350 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.990417 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.990491 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.990807 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:00.990855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:01.490637 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.490718 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.491102 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:01.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.991226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.490011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.990107 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.990182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.990528 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:03.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:03.490666 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:03.990323 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.990398 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.990774 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.490099 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.490181 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.990232 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.990304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.990399 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.990480 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.990832 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:05.990887 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:06.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.491014 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:06.990839 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.990944 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.991373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.490104 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.990134 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.990205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.990514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:08.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.490044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:08.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:08.990161 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.990242 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.990563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.490287 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.490623 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:10.490169 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:10.490691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:10.990484 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.990556 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.990880 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.490691 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.490770 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.491148 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.490147 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:12.990662 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:13.490189 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.490632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:13.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.990530 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.490218 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.490648 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.990225 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.990310 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.990701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:14.990757 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:15.490513 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.490584 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.490919 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:15.990746 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.991183 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.489918 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.490332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.990305 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:17.489916 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.489993 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:17.490411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:17.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.990067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:19.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:19.490499 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:19.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.990384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.490152 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.990524 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.990901 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:21.490562 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.490638 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.490935 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:21.490983 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:21.990766 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.991203 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.490028 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.989946 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.490115 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.490193 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.990250 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.990328 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:23.990712 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:24.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.490259 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:24.990067 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.990519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.490112 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.490189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.490544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.990337 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.990408 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.990679 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:26.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.490493 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:26.490549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:26.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.990450 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.490004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.490303 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.489996 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.490409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.989926 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.990293 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:28.990335 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:29.489955 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.490419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.490182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.990514 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.990968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:30.991038 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:31.490801 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.490876 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.491238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:31.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.990319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.490200 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.490583 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.990457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:33.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:33.490407 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:33.989968 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.990049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.990398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.490108 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.490195 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.990023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.990366 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.490049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.490369 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.990125 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.990203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.990545 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:35.990599 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:36.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.490569 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:36.990016 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.490182 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.490252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.990266 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.990607 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:37.990658 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:38.490325 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.490406 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.490727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:38.990379 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.990457 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.990798 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.490205 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.990463 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:40.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.490394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:40.490452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:40.990349 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.990727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.490471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.990033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.990356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.489960 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.490300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.990099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:42.990502 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:43.490037 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:43.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.990473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:44.990528 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:45.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.490577 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:45.990696 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.990768 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.991105 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.490888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.490961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.491348 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.989888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.989967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.990307 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:47.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.490035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:47.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.490130 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.490210 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.990197 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.990271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.990616 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:49.490301 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.490378 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.490708 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:49.490767 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:49.990515 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.990590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.990888 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.490679 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.490750 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.491091 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.990830 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.990903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.991524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.490235 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.490311 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.490637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.990347 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:51.990776 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:52.490531 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.490608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.490905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:52.990690 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.990761 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.490818 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.490896 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.491226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.990031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.990354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:54.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.490015 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:54.490441 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:54.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.490031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.490422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.990385 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.990459 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.990735 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:56.490568 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.490961 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:56.491006 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:56.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.991170 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.489861 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.489931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.490260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.989963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.990042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.990407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.490137 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.490214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.990306 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.990624 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:58.990667 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:59.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:59.990163 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.990236 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.490289 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.490597 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.990658 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.990731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.991092 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:00.991152 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:01.490884 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.490964 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.491316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:01.990041 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.990455 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.490830 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.490906 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.990098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:03.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:03.490390 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:03.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.990467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.490255 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.490608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.990184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:05.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.490051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:05.490456 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:05.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.990523 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.990849 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.490609 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.490954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.990889 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.991238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.490037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.990334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:07.990374 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:08.490043 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.490140 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:08.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.990041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.490498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.990254 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.990339 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.990808 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:09.990886 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:10.490650 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.490731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.491042 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:10.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.991173 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.489924 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.490252 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:12.490161 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.490230 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:12.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:12.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.990009 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.990308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.490144 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.990441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:14.990497 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:15.490170 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.490244 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.490586 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:15.990615 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.990697 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.991007 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.490796 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.491907 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.990669 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.990740 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:16.991078 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:17.490833 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.490913 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.491260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:17.989966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.489932 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.990433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:19.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.490224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:19.490656 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:19.990226 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.990300 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.490362 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.990101 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.990509 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.490185 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.490264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.490595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.989954 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:21.990455 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:22.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.490385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:22.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.990003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.490027 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.490129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.990190 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.990277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.990691 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:23.990747 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:24.490514 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.490583 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.490927 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:24.990720 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.990794 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.490979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.491379 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.990094 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.990175 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.990521 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:26.490373 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.490449 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.490797 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:26.490855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:26.990580 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.990656 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.991034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.490724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.490790 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.491046 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.991259 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:28.490911 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.490985 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.491318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:28.491372 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:28.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.990007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.990342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.490444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.990413 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.990487 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.990822 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:30.990881 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:31.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.490709 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.491051 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:31.990823 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.991165 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.490952 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.491029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.491373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.990035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.990378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:33.490362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:33.989947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.490197 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.490500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.989977 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.990263 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:35.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.490026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:35.490389 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:35.990283 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.990360 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.990662 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.490280 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.490578 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.990095 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:37.490145 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.490220 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.490554 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:37.490609 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:37.990015 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.990389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.490110 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.490185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.990118 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.490226 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.990199 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:39.990715 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:40.490392 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.490536 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.490920 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:40.990760 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.990841 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.991131 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.490923 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.490995 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.491302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.990033 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.990472 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:42.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.490221 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.490485 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:42.490527 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:42.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.990011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.990102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:44.990519 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:45.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.490260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:45.990602 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.991037 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.490835 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.490909 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.989941 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.990385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:47.489947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.490374 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:47.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.490143 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.990580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:49.490307 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.490383 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.490720 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:49.490772 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:49.990521 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.990599 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.990879 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.491079 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.990724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.991088 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:51.490793 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.491153 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:51.491198 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:51.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.990118 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.990246 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.990561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.490415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:53.990481 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:54.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.490008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.490340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:54.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.490553 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.990361 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.990433 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:55.990770 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:56.490522 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.490596 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.490941 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:56.990624 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.990700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.991017 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.490692 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.490956 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.990832 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.990908 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.991282 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:57.991347 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:58.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:58.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.990284 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.490073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.490464 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.990275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:00.490281 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.490364 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.490677 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:00.490726 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:00.990700 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.990777 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.490903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.491267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.990417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.490437 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.990175 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:02.990670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:03.490193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.490631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.990322 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.990405 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.490523 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.490601 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.490958 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.990688 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.990756 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.991031 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.991073 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:05.490786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.491193 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.990885 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.990960 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.991336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.490367 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.990048 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.990148 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.490226 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.490304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.490653 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:07.490707 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:07.990247 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.990320 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.990637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.490406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.990481 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.490010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:09.990473 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:10.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.490228 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.490601 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.990609 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.990681 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.990963 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.490826 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.490912 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.491261 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.989991 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.990068 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:11.990514 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:12.489967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.490323 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.990096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.990095 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.990492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.990549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:14.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.990144 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.990224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.990592 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.490277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.490570 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.990543 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.990628 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.991069 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.991135 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.490872 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.490956 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.491310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.490123 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.490206 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.490561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.990295 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.990377 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.990730 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.490522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.490787 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:18.490828 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.990605 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.991041 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.490876 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.490953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.491342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.990447 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.990519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.990864 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:20.990919 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:21.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.490700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.990741 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.990818 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.991152 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.490904 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.490981 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.491320 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.989871 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.989939 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.990221 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.989977 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.990381 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.490041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.989999 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.990314 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:25.990363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.490096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.490426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.490090 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.490172 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.490501 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:27.990490 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:28.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.490121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.990112 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.990185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.490221 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.490292 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.490664 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.990003 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.490150 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.490225 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.490592 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:30.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.990567 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.990902 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.490527 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.490606 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.490937 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.990701 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.990772 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.991052 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.490780 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.491194 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.491251 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:32.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.490333 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.989999 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.990090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.990434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.490164 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.490237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.490525 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:34.990463 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.990424 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.990500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.990847 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.490629 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.490699 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.491002 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.990786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.990862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.991205 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:36.991272 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.990047 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.990142 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.489979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.990127 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.990498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.490565 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:39.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.490020 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.990313 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.990393 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.990738 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.490590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.490933 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.490989 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:41.990751 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.991149 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.489855 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.489927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.490220 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.989969 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.990392 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.490355 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.989943 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:43.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.490068 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.490167 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.490098 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.490445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.990499 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.990579 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.990932 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:45.990988 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.490765 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.490851 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.491199 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.989903 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.989975 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.990267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.490040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.490410 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.990272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.990633 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.490557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.490605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.490121 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.490602 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.990180 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.990573 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.490264 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.490334 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.490701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.490762 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:50.990600 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.990676 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.991023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.490888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.491166 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.989883 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.989958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.990326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.490029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.990008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.990311 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:52.990362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.490023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.989949 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.990375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.490788 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.490862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.491123 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.990890 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.990969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.991274 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:54.991322 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.490034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.490395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.990532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.490594 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:57.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.490205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:57.490521 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:57.990207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.990685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.490950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.990719 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.991070 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.490926 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.491272 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:59.491325 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:59.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.990477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.491169 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.491258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.491580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.990547 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.990624 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.991006 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.490798 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.490875 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.491244 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:01.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:02.490072 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.490171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:02.990211 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.990296 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.990636 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:03.990451 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:04.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:04.989923 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.490111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:05.990476 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:06.490127 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.490204 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:06.989958 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.990380 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.490431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.990121 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.990482 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:07.990525 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:08.489974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.490425 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.990156 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.990576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.490271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.490542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.490071 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.490532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:10.490597 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:10.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.990397 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.990739 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.490506 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.490943 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.990748 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.990830 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.991167 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.489923 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.490225 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.990044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.990403 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:12.990470 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:13.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:13.990123 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.990198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.490054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.990189 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.990627 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:14.990691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:15.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.490254 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.490558 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:15.990404 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.990479 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.990821 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.491027 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.990834 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.990930 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.991327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:16.991378 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:17.489874 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.489955 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.490319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:17.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.490359 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:19.489997 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.490461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:19.490518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:19.990172 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.990243 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.990549 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:20.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:20.490384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.990305 1440600 node_ready.go:38] duration metric: took 6m0.000552396s for node "functional-973657" to be "Ready" ...
	I1222 00:28:20.993510 1440600 out.go:203] 
	W1222 00:28:20.996431 1440600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:28:20.996456 1440600 out.go:285] * 
	W1222 00:28:20.998594 1440600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:28:21.002257 1440600 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:28:28 functional-973657 containerd[5251]: time="2025-12-22T00:28:28.341976059Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.376832555Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.379095838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.386895507Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.387397526Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.413533041Z" level=info msg="No images store for sha256:752b9ba1e553bacfdee75fccc26fb899d1f930e210eb3b7f0c4eebd90988bda3"
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.416392292Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-973657\""
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.425048683Z" level=info msg="ImageCreate event name:\"sha256:082049fa7835e23c46c09f80be520b2afb0d7d032957be9d461df564fef85ac1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.425530361Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.218038733Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.220631093Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.223026347Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.235865776Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.124332900Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.126924874Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.130007191Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.136871755Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.303412403Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.305607853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.312621694Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.313589965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.438075263Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.440318607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.450859601Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.451513802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:28:34.216670    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:34.217038    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:34.218504    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:34.218833    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:34.220404    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:28:34 up 1 day,  7:11,  0 user,  load average: 0.32, 0.31, 0.87
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:28:31 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:31 functional-973657 kubelet[9015]: E1222 00:28:31.306691    9015 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:31 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:31 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:31 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 22 00:28:31 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:31 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:32 functional-973657 kubelet[9054]: E1222 00:28:32.059285    9054 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:32 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:32 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:32 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 22 00:28:32 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:32 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:32 functional-973657 kubelet[9158]: E1222 00:28:32.816174    9158 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:32 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:32 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:33 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 22 00:28:33 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:33 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:33 functional-973657 kubelet[9179]: E1222 00:28:33.557984    9179 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:33 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:33 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:34 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 22 00:28:34 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:34 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (372.115854ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-973657 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-973657 get pods: exit status 1 (117.626372ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-973657 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (302.255884ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-722318 image ls --format short --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format yaml --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh     │ functional-722318 ssh pgrep buildkitd                                                                                                                 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ image   │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr                                                │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format json --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format table --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls                                                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ delete  │ -p functional-722318                                                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ start   │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ start   │ -p functional-973657 --alsologtostderr -v=8                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:22 UTC │                     │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:latest                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add minikube-local-cache-test:functional-973657                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache delete minikube-local-cache-test:functional-973657                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl images                                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ cache   │ functional-973657 cache reload                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ kubectl │ functional-973657 kubectl -- --context functional-973657 get pods                                                                                     │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:22:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:22:15.745982 1440600 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:22:15.746211 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746249 1440600 out.go:374] Setting ErrFile to fd 2...
	I1222 00:22:15.746270 1440600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:22:15.746555 1440600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:22:15.747001 1440600 out.go:368] Setting JSON to false
	I1222 00:22:15.747938 1440600 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111889,"bootTime":1766251047,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:22:15.748043 1440600 start.go:143] virtualization:  
	I1222 00:22:15.753569 1440600 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:22:15.756598 1440600 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:22:15.756741 1440600 notify.go:221] Checking for updates...
	I1222 00:22:15.762722 1440600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:22:15.765671 1440600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:15.768657 1440600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:22:15.771623 1440600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:22:15.774619 1440600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:22:15.777830 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:15.777978 1440600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:22:15.812917 1440600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:22:15.813051 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.874179 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.864674601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.874289 1440600 docker.go:319] overlay module found
	I1222 00:22:15.877302 1440600 out.go:179] * Using the docker driver based on existing profile
	I1222 00:22:15.880104 1440600 start.go:309] selected driver: docker
	I1222 00:22:15.880124 1440600 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.880226 1440600 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:22:15.880331 1440600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:22:15.936346 1440600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:22:15.927222796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:22:15.936748 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:15.936818 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:15.936877 1440600 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:15.939915 1440600 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:22:15.942690 1440600 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:22:15.945666 1440600 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:22:15.948535 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:15.948600 1440600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:22:15.948615 1440600 cache.go:65] Caching tarball of preloaded images
	I1222 00:22:15.948645 1440600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:22:15.948702 1440600 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:22:15.948713 1440600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:22:15.948830 1440600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:22:15.969249 1440600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:22:15.969274 1440600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:22:15.969294 1440600 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:22:15.969326 1440600 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:22:15.969396 1440600 start.go:364] duration metric: took 41.633µs to acquireMachinesLock for "functional-973657"
	I1222 00:22:15.969420 1440600 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:22:15.969432 1440600 fix.go:54] fixHost starting: 
	I1222 00:22:15.969697 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:15.991071 1440600 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:22:15.991104 1440600 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:22:15.994289 1440600 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:22:15.994325 1440600 machine.go:94] provisionDockerMachine start ...
	I1222 00:22:15.994407 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.016696 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.017052 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.017069 1440600 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:22:16.150117 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.150145 1440600 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:22:16.150214 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.171110 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.171503 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.171525 1440600 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:22:16.320804 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:22:16.320911 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.341102 1440600 main.go:144] libmachine: Using SSH client type: native
	I1222 00:22:16.341468 1440600 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:22:16.341492 1440600 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:22:16.474666 1440600 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:22:16.474761 1440600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:22:16.474804 1440600 ubuntu.go:190] setting up certificates
	I1222 00:22:16.474823 1440600 provision.go:84] configureAuth start
	I1222 00:22:16.474894 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:16.493393 1440600 provision.go:143] copyHostCerts
	I1222 00:22:16.493439 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493474 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:22:16.493495 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:22:16.493578 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:22:16.493680 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493704 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:22:16.493715 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:22:16.493744 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:22:16.493808 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493831 1440600 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:22:16.493837 1440600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:22:16.493863 1440600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:22:16.493929 1440600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:22:16.688332 1440600 provision.go:177] copyRemoteCerts
	I1222 00:22:16.688423 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:22:16.688474 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.708412 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.807036 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 00:22:16.807104 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:22:16.826203 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 00:22:16.826269 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:22:16.844818 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 00:22:16.844882 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 00:22:16.862814 1440600 provision.go:87] duration metric: took 387.965654ms to configureAuth
	I1222 00:22:16.862846 1440600 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:22:16.863040 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:16.863055 1440600 machine.go:97] duration metric: took 868.721817ms to provisionDockerMachine
	I1222 00:22:16.863063 1440600 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:22:16.863075 1440600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:22:16.863140 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:22:16.863187 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:16.881215 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:16.978224 1440600 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:22:16.981674 1440600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 00:22:16.981697 1440600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 00:22:16.981701 1440600 command_runner.go:130] > VERSION_ID="12"
	I1222 00:22:16.981706 1440600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 00:22:16.981711 1440600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 00:22:16.981715 1440600 command_runner.go:130] > ID=debian
	I1222 00:22:16.981720 1440600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 00:22:16.981726 1440600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 00:22:16.981732 1440600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 00:22:16.981781 1440600 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:22:16.981805 1440600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:22:16.981817 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:22:16.981874 1440600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:22:16.981966 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:22:16.981976 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /etc/ssl/certs/13968642.pem
	I1222 00:22:16.982050 1440600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:22:16.982058 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> /etc/test/nested/copy/1396864/hosts
	I1222 00:22:16.982135 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:22:16.991499 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:17.014617 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:22:17.034289 1440600 start.go:296] duration metric: took 171.210875ms for postStartSetup
	I1222 00:22:17.034373 1440600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:22:17.034421 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.055784 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.151461 1440600 command_runner.go:130] > 11%
	I1222 00:22:17.151551 1440600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:22:17.156056 1440600 command_runner.go:130] > 174G
	I1222 00:22:17.156550 1440600 fix.go:56] duration metric: took 1.187112425s for fixHost
	I1222 00:22:17.156572 1440600 start.go:83] releasing machines lock for "functional-973657", held for 1.187162091s
	I1222 00:22:17.156642 1440600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:22:17.174525 1440600 ssh_runner.go:195] Run: cat /version.json
	I1222 00:22:17.174589 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.174652 1440600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:22:17.174714 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:17.196471 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.199230 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:17.379176 1440600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 00:22:17.379235 1440600 command_runner.go:130] > {"iso_version": "v1.37.0-1765965980-22186", "kicbase_version": "v0.0.48-1766219634-22260", "minikube_version": "v1.37.0", "commit": "84997fca2a3b77f8e0b5b5ebeca663f85f924cfc"}
	I1222 00:22:17.379354 1440600 ssh_runner.go:195] Run: systemctl --version
	I1222 00:22:17.385410 1440600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 00:22:17.385465 1440600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 00:22:17.385880 1440600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 00:22:17.390276 1440600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 00:22:17.390418 1440600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:22:17.390488 1440600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:22:17.398542 1440600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:22:17.398570 1440600 start.go:496] detecting cgroup driver to use...
	I1222 00:22:17.398621 1440600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:22:17.398692 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:22:17.414048 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:22:17.427185 1440600 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:22:17.427253 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:22:17.442685 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:22:17.455696 1440600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:22:17.577927 1440600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:22:17.693641 1440600 docker.go:234] disabling docker service ...
	I1222 00:22:17.693740 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:22:17.714854 1440600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:22:17.729523 1440600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:22:17.852439 1440600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:22:17.963077 1440600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:22:17.977041 1440600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:22:17.991276 1440600 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 00:22:17.992369 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:22:18.003034 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:22:18.019363 1440600 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:22:18.019441 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:22:18.030259 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.041222 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:22:18.051429 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:22:18.060629 1440600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:22:18.069455 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:22:18.079294 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:22:18.088607 1440600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:22:18.097955 1440600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:22:18.105014 1440600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 00:22:18.106002 1440600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:22:18.114147 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.224816 1440600 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:22:18.353040 1440600 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:22:18.353118 1440600 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:22:18.356934 1440600 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1222 00:22:18.357009 1440600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 00:22:18.357030 1440600 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1222 00:22:18.357053 1440600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:18.357086 1440600 command_runner.go:130] > Access: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357111 1440600 command_runner.go:130] > Modify: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357132 1440600 command_runner.go:130] > Change: 2025-12-22 00:22:18.309838958 +0000
	I1222 00:22:18.357178 1440600 command_runner.go:130] >  Birth: -
	I1222 00:22:18.357507 1440600 start.go:564] Will wait 60s for crictl version
	I1222 00:22:18.357612 1440600 ssh_runner.go:195] Run: which crictl
	I1222 00:22:18.361021 1440600 command_runner.go:130] > /usr/local/bin/crictl
	I1222 00:22:18.361396 1440600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:22:18.384093 1440600 command_runner.go:130] > Version:  0.1.0
	I1222 00:22:18.384169 1440600 command_runner.go:130] > RuntimeName:  containerd
	I1222 00:22:18.384205 1440600 command_runner.go:130] > RuntimeVersion:  v2.2.1
	I1222 00:22:18.384240 1440600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 00:22:18.386573 1440600 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:22:18.386687 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.407693 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.410154 1440600 ssh_runner.go:195] Run: containerd --version
	I1222 00:22:18.429567 1440600 command_runner.go:130] > containerd containerd.io v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
	I1222 00:22:18.437868 1440600 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:22:18.440703 1440600 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:22:18.457963 1440600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:22:18.462339 1440600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 00:22:18.462457 1440600 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:22:18.462560 1440600 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:22:18.462639 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.493006 1440600 command_runner.go:130] > {
	I1222 00:22:18.493026 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.493030 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493040 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.493045 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493051 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.493055 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493059 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493072 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.493076 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493081 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.493085 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493089 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493092 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493095 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493102 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.493106 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493112 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.493116 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493120 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493128 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.493135 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493139 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.493143 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493147 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493150 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493153 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493162 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.493166 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493171 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.493178 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493186 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493194 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.493197 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493201 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.493206 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.493210 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493213 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493216 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493223 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.493227 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493231 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.493235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493238 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493246 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.493249 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493253 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.493258 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493261 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493264 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493268 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493271 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493275 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493278 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493285 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.493289 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493294 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.493297 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493300 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493308 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.493311 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493316 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.493319 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493335 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493338 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493342 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493346 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493349 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493352 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493359 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.493362 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493368 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.493371 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493374 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493383 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.493386 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493389 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.493393 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493396 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493399 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493403 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493407 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493410 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493413 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493420 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.493423 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493429 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.493432 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493435 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493443 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.493446 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493450 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.493454 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493457 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493460 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493464 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493475 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.493479 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493484 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.493487 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493491 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493498 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.493501 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493505 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.493509 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493512 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.493516 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493519 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493523 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.493526 1440600 command_runner.go:130] >     },
	I1222 00:22:18.493529 1440600 command_runner.go:130] >     {
	I1222 00:22:18.493536 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.493539 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.493543 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.493547 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493550 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.493557 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.493560 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.493564 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.493568 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.493571 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.493575 1440600 command_runner.go:130] >       },
	I1222 00:22:18.493579 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.493582 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.493585 1440600 command_runner.go:130] >     }
	I1222 00:22:18.493588 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.493591 1440600 command_runner.go:130] > }
	I1222 00:22:18.493746 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.493754 1440600 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:22:18.493814 1440600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:22:18.517780 1440600 command_runner.go:130] > {
	I1222 00:22:18.517799 1440600 command_runner.go:130] >   "images":  [
	I1222 00:22:18.517803 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517813 1440600 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1222 00:22:18.517818 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517824 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1222 00:22:18.517827 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517831 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517839 1440600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1222 00:22:18.517843 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517856 1440600 command_runner.go:130] >       "size":  "40636774",
	I1222 00:22:18.517861 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517865 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517867 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517870 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517878 1440600 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1222 00:22:18.517882 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517887 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1222 00:22:18.517890 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517894 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517902 1440600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1222 00:22:18.517906 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517910 1440600 command_runner.go:130] >       "size":  "8034419",
	I1222 00:22:18.517913 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.517917 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517920 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517923 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517930 1440600 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1222 00:22:18.517934 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517939 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1222 00:22:18.517942 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517947 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.517955 1440600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1222 00:22:18.517958 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.517962 1440600 command_runner.go:130] >       "size":  "21168808",
	I1222 00:22:18.517966 1440600 command_runner.go:130] >       "username":  "nonroot",
	I1222 00:22:18.517970 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.517974 1440600 command_runner.go:130] >     },
	I1222 00:22:18.517977 1440600 command_runner.go:130] >     {
	I1222 00:22:18.517983 1440600 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1222 00:22:18.517987 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.517992 1440600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1222 00:22:18.517995 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518002 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518010 1440600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1222 00:22:18.518013 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518017 1440600 command_runner.go:130] >       "size":  "21749640",
	I1222 00:22:18.518022 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518026 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518029 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518033 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518037 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518041 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518043 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518050 1440600 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1222 00:22:18.518054 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518059 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1222 00:22:18.518062 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518066 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518073 1440600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1222 00:22:18.518098 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518103 1440600 command_runner.go:130] >       "size":  "24692223",
	I1222 00:22:18.518106 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518115 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518118 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518122 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518125 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518128 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518131 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518142 1440600 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1222 00:22:18.518146 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518151 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1222 00:22:18.518155 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518158 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518166 1440600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1222 00:22:18.518170 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518178 1440600 command_runner.go:130] >       "size":  "20672157",
	I1222 00:22:18.518182 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518185 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518188 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518192 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518195 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518198 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518202 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518209 1440600 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1222 00:22:18.518212 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518217 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1222 00:22:18.518220 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518224 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518231 1440600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1222 00:22:18.518235 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518239 1440600 command_runner.go:130] >       "size":  "22432301",
	I1222 00:22:18.518242 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518246 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518249 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518253 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518260 1440600 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1222 00:22:18.518264 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518269 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1222 00:22:18.518273 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518277 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518285 1440600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1222 00:22:18.518288 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518292 1440600 command_runner.go:130] >       "size":  "15405535",
	I1222 00:22:18.518295 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518299 1440600 command_runner.go:130] >         "value":  "0"
	I1222 00:22:18.518302 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518306 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518310 1440600 command_runner.go:130] >       "pinned":  false
	I1222 00:22:18.518318 1440600 command_runner.go:130] >     },
	I1222 00:22:18.518322 1440600 command_runner.go:130] >     {
	I1222 00:22:18.518328 1440600 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1222 00:22:18.518332 1440600 command_runner.go:130] >       "repoTags":  [
	I1222 00:22:18.518337 1440600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1222 00:22:18.518340 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518344 1440600 command_runner.go:130] >       "repoDigests":  [
	I1222 00:22:18.518352 1440600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1222 00:22:18.518355 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.518358 1440600 command_runner.go:130] >       "size":  "267939",
	I1222 00:22:18.518362 1440600 command_runner.go:130] >       "uid":  {
	I1222 00:22:18.518366 1440600 command_runner.go:130] >         "value":  "65535"
	I1222 00:22:18.518371 1440600 command_runner.go:130] >       },
	I1222 00:22:18.518375 1440600 command_runner.go:130] >       "username":  "",
	I1222 00:22:18.518379 1440600 command_runner.go:130] >       "pinned":  true
	I1222 00:22:18.518388 1440600 command_runner.go:130] >     }
	I1222 00:22:18.518391 1440600 command_runner.go:130] >   ]
	I1222 00:22:18.518397 1440600 command_runner.go:130] > }
	I1222 00:22:18.524524 1440600 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:22:18.524599 1440600 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:22:18.524620 1440600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:22:18.524759 1440600 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:22:18.524857 1440600 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:22:18.549454 1440600 command_runner.go:130] > {
	I1222 00:22:18.549479 1440600 command_runner.go:130] >   "cniconfig": {
	I1222 00:22:18.549486 1440600 command_runner.go:130] >     "Networks": [
	I1222 00:22:18.549489 1440600 command_runner.go:130] >       {
	I1222 00:22:18.549495 1440600 command_runner.go:130] >         "Config": {
	I1222 00:22:18.549500 1440600 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1222 00:22:18.549519 1440600 command_runner.go:130] >           "Name": "cni-loopback",
	I1222 00:22:18.549527 1440600 command_runner.go:130] >           "Plugins": [
	I1222 00:22:18.549530 1440600 command_runner.go:130] >             {
	I1222 00:22:18.549541 1440600 command_runner.go:130] >               "Network": {
	I1222 00:22:18.549546 1440600 command_runner.go:130] >                 "ipam": {},
	I1222 00:22:18.549551 1440600 command_runner.go:130] >                 "type": "loopback"
	I1222 00:22:18.549560 1440600 command_runner.go:130] >               },
	I1222 00:22:18.549566 1440600 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1222 00:22:18.549570 1440600 command_runner.go:130] >             }
	I1222 00:22:18.549579 1440600 command_runner.go:130] >           ],
	I1222 00:22:18.549590 1440600 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1222 00:22:18.549604 1440600 command_runner.go:130] >         },
	I1222 00:22:18.549612 1440600 command_runner.go:130] >         "IFName": "lo"
	I1222 00:22:18.549615 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549619 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549626 1440600 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1222 00:22:18.549636 1440600 command_runner.go:130] >     "PluginDirs": [
	I1222 00:22:18.549640 1440600 command_runner.go:130] >       "/opt/cni/bin"
	I1222 00:22:18.549643 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549648 1440600 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1222 00:22:18.549656 1440600 command_runner.go:130] >     "Prefix": "eth"
	I1222 00:22:18.549667 1440600 command_runner.go:130] >   },
	I1222 00:22:18.549674 1440600 command_runner.go:130] >   "config": {
	I1222 00:22:18.549678 1440600 command_runner.go:130] >     "cdiSpecDirs": [
	I1222 00:22:18.549682 1440600 command_runner.go:130] >       "/etc/cdi",
	I1222 00:22:18.549687 1440600 command_runner.go:130] >       "/var/run/cdi"
	I1222 00:22:18.549691 1440600 command_runner.go:130] >     ],
	I1222 00:22:18.549695 1440600 command_runner.go:130] >     "cni": {
	I1222 00:22:18.549698 1440600 command_runner.go:130] >       "binDir": "",
	I1222 00:22:18.549702 1440600 command_runner.go:130] >       "binDirs": [
	I1222 00:22:18.549706 1440600 command_runner.go:130] >         "/opt/cni/bin"
	I1222 00:22:18.549709 1440600 command_runner.go:130] >       ],
	I1222 00:22:18.549713 1440600 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1222 00:22:18.549717 1440600 command_runner.go:130] >       "confTemplate": "",
	I1222 00:22:18.549720 1440600 command_runner.go:130] >       "ipPref": "",
	I1222 00:22:18.549728 1440600 command_runner.go:130] >       "maxConfNum": 1,
	I1222 00:22:18.549732 1440600 command_runner.go:130] >       "setupSerially": false,
	I1222 00:22:18.549739 1440600 command_runner.go:130] >       "useInternalLoopback": false
	I1222 00:22:18.549748 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549754 1440600 command_runner.go:130] >     "containerd": {
	I1222 00:22:18.549759 1440600 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1222 00:22:18.549768 1440600 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1222 00:22:18.549773 1440600 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1222 00:22:18.549777 1440600 command_runner.go:130] >       "runtimes": {
	I1222 00:22:18.549781 1440600 command_runner.go:130] >         "runc": {
	I1222 00:22:18.549786 1440600 command_runner.go:130] >           "ContainerAnnotations": null,
	I1222 00:22:18.549795 1440600 command_runner.go:130] >           "PodAnnotations": null,
	I1222 00:22:18.549799 1440600 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1222 00:22:18.549803 1440600 command_runner.go:130] >           "cgroupWritable": false,
	I1222 00:22:18.549808 1440600 command_runner.go:130] >           "cniConfDir": "",
	I1222 00:22:18.549816 1440600 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1222 00:22:18.549825 1440600 command_runner.go:130] >           "io_type": "",
	I1222 00:22:18.549829 1440600 command_runner.go:130] >           "options": {
	I1222 00:22:18.549834 1440600 command_runner.go:130] >             "BinaryName": "",
	I1222 00:22:18.549841 1440600 command_runner.go:130] >             "CriuImagePath": "",
	I1222 00:22:18.549847 1440600 command_runner.go:130] >             "CriuWorkPath": "",
	I1222 00:22:18.549851 1440600 command_runner.go:130] >             "IoGid": 0,
	I1222 00:22:18.549860 1440600 command_runner.go:130] >             "IoUid": 0,
	I1222 00:22:18.549864 1440600 command_runner.go:130] >             "NoNewKeyring": false,
	I1222 00:22:18.549869 1440600 command_runner.go:130] >             "Root": "",
	I1222 00:22:18.549874 1440600 command_runner.go:130] >             "ShimCgroup": "",
	I1222 00:22:18.549883 1440600 command_runner.go:130] >             "SystemdCgroup": false
	I1222 00:22:18.549890 1440600 command_runner.go:130] >           },
	I1222 00:22:18.549896 1440600 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1222 00:22:18.549907 1440600 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1222 00:22:18.549911 1440600 command_runner.go:130] >           "runtimePath": "",
	I1222 00:22:18.549916 1440600 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1222 00:22:18.549920 1440600 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1222 00:22:18.549924 1440600 command_runner.go:130] >           "snapshotter": ""
	I1222 00:22:18.549928 1440600 command_runner.go:130] >         }
	I1222 00:22:18.549931 1440600 command_runner.go:130] >       }
	I1222 00:22:18.549934 1440600 command_runner.go:130] >     },
	I1222 00:22:18.549944 1440600 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1222 00:22:18.549953 1440600 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1222 00:22:18.549961 1440600 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1222 00:22:18.549965 1440600 command_runner.go:130] >     "disableApparmor": false,
	I1222 00:22:18.549970 1440600 command_runner.go:130] >     "disableHugetlbController": true,
	I1222 00:22:18.549978 1440600 command_runner.go:130] >     "disableProcMount": false,
	I1222 00:22:18.549983 1440600 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1222 00:22:18.549987 1440600 command_runner.go:130] >     "enableCDI": true,
	I1222 00:22:18.549991 1440600 command_runner.go:130] >     "enableSelinux": false,
	I1222 00:22:18.549996 1440600 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1222 00:22:18.550004 1440600 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1222 00:22:18.550010 1440600 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1222 00:22:18.550015 1440600 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1222 00:22:18.550019 1440600 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1222 00:22:18.550024 1440600 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1222 00:22:18.550035 1440600 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1222 00:22:18.550046 1440600 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550051 1440600 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1222 00:22:18.550059 1440600 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1222 00:22:18.550068 1440600 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1222 00:22:18.550072 1440600 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1222 00:22:18.550165 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550176 1440600 command_runner.go:130] >   "features": {
	I1222 00:22:18.550180 1440600 command_runner.go:130] >     "supplemental_groups_policy": true
	I1222 00:22:18.550184 1440600 command_runner.go:130] >   },
	I1222 00:22:18.550188 1440600 command_runner.go:130] >   "golang": "go1.24.11",
	I1222 00:22:18.550201 1440600 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550222 1440600 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1222 00:22:18.550231 1440600 command_runner.go:130] >   "runtimeHandlers": [
	I1222 00:22:18.550234 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550238 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550243 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550253 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550257 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550260 1440600 command_runner.go:130] >     },
	I1222 00:22:18.550264 1440600 command_runner.go:130] >     {
	I1222 00:22:18.550268 1440600 command_runner.go:130] >       "features": {
	I1222 00:22:18.550272 1440600 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1222 00:22:18.550277 1440600 command_runner.go:130] >         "user_namespaces": true
	I1222 00:22:18.550282 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550286 1440600 command_runner.go:130] >       "name": "runc"
	I1222 00:22:18.550290 1440600 command_runner.go:130] >     }
	I1222 00:22:18.550293 1440600 command_runner.go:130] >   ],
	I1222 00:22:18.550296 1440600 command_runner.go:130] >   "status": {
	I1222 00:22:18.550302 1440600 command_runner.go:130] >     "conditions": [
	I1222 00:22:18.550305 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550315 1440600 command_runner.go:130] >         "message": "",
	I1222 00:22:18.550319 1440600 command_runner.go:130] >         "reason": "",
	I1222 00:22:18.550327 1440600 command_runner.go:130] >         "status": true,
	I1222 00:22:18.550337 1440600 command_runner.go:130] >         "type": "RuntimeReady"
	I1222 00:22:18.550341 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550344 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550352 1440600 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1222 00:22:18.550360 1440600 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1222 00:22:18.550365 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550369 1440600 command_runner.go:130] >         "type": "NetworkReady"
	I1222 00:22:18.550373 1440600 command_runner.go:130] >       },
	I1222 00:22:18.550375 1440600 command_runner.go:130] >       {
	I1222 00:22:18.550400 1440600 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1222 00:22:18.550411 1440600 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1222 00:22:18.550417 1440600 command_runner.go:130] >         "status": false,
	I1222 00:22:18.550423 1440600 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1222 00:22:18.550427 1440600 command_runner.go:130] >       }
	I1222 00:22:18.550430 1440600 command_runner.go:130] >     ]
	I1222 00:22:18.550433 1440600 command_runner.go:130] >   }
	I1222 00:22:18.550437 1440600 command_runner.go:130] > }
	I1222 00:22:18.553215 1440600 cni.go:84] Creating CNI manager for ""
	I1222 00:22:18.553243 1440600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:22:18.553264 1440600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:22:18.553287 1440600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:22:18.553412 1440600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
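	The kubeadm config dumped above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube writes to `/var/tmp/minikube/kubeadm.yaml.new`. A minimal sketch of sanity-checking such a stream before handing it to kubeadm — the temp file and its contents here are illustrative stand-ins, not taken from the log verbatim:

```shell
# Sketch: count documents and `kind:` declarations in a multi-document
# kubeadm config stream; each document should declare exactly one kind.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
seps=$(grep -c '^---$' "$cfg")       # separators; documents = separators + 1
kinds=$(grep -c '^kind:' "$cfg")
echo "documents=$((seps + 1)) kinds=$kinds"
```

On newer kubeadm releases, `kubeadm config validate --config <file>` performs a real schema check of the same stream.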
	I1222 00:22:18.553487 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:22:18.562348 1440600 command_runner.go:130] > kubeadm
	I1222 00:22:18.562392 1440600 command_runner.go:130] > kubectl
	I1222 00:22:18.562397 1440600 command_runner.go:130] > kubelet
	I1222 00:22:18.563648 1440600 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:22:18.563729 1440600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:22:18.571505 1440600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:22:18.584676 1440600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:22:18.597236 1440600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 00:22:18.610841 1440600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:22:18.614244 1440600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
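	The grep above is the first half of an idempotent "ensure host entry" pattern: check for the line, append only if missing. A sketch against a temp file rather than the real /etc/hosts (the entry value mirrors the log; the helper name is illustrative):

```shell
# Sketch: idempotently ensure a hosts entry, as minikube does for
# control-plane.minikube.internal.
hosts=$(mktemp)
entry='192.168.49.2	control-plane.minikube.internal'
add_entry() {
  grep -q 'control-plane.minikube.internal$' "$hosts" || printf '%s\n' "$entry" >> "$hosts"
}
add_entry
add_entry   # second call is a no-op: the entry already exists
count=$(grep -c 'control-plane.minikube.internal' "$hosts")
echo "entries=$count"
```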
	I1222 00:22:18.614541 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:18.726610 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:19.239399 1440600 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:22:19.239420 1440600 certs.go:195] generating shared ca certs ...
	I1222 00:22:19.239437 1440600 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.239601 1440600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:22:19.239659 1440600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:22:19.239667 1440600 certs.go:257] generating profile certs ...
	I1222 00:22:19.239794 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:22:19.239853 1440600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:22:19.239904 1440600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:22:19.239913 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 00:22:19.239940 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 00:22:19.239954 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 00:22:19.239964 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 00:22:19.239974 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 00:22:19.239986 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 00:22:19.239996 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 00:22:19.240015 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 00:22:19.240069 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:22:19.240100 1440600 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:22:19.240108 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:22:19.240138 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:22:19.240165 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:22:19.240227 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:22:19.240279 1440600 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:22:19.240316 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.240338 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.240354 1440600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem -> /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.240935 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:22:19.264800 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:22:19.285797 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:22:19.306670 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:22:19.326432 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:22:19.345177 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:22:19.365354 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:22:19.385285 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:22:19.406674 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:22:19.425094 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:22:19.443464 1440600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:22:19.461417 1440600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:22:19.474356 1440600 ssh_runner.go:195] Run: openssl version
	I1222 00:22:19.480426 1440600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 00:22:19.480764 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.488508 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:22:19.496491 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500580 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500632 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.500692 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:22:19.542795 1440600 command_runner.go:130] > 3ec20f2e
	I1222 00:22:19.543311 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:22:19.550778 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.558196 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:22:19.566111 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570217 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570294 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.570384 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:22:19.611673 1440600 command_runner.go:130] > b5213941
	I1222 00:22:19.612225 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:22:19.620704 1440600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.628264 1440600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:22:19.635997 1440600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.639846 1440600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640210 1440600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.640329 1440600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:22:19.681144 1440600 command_runner.go:130] > 51391683
	I1222 00:22:19.681670 1440600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
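	The hash-and-symlink sequence above (for `13968642.pem`, `minikubeCA.pem`, and `1396864.pem`) is how OpenSSL-style trust directories are populated: OpenSSL looks up a CA by a `<subject-hash>.0` symlink in the certs directory. A self-contained sketch with a throwaway certificate and temp directories standing in for /usr/share/ca-certificates and /etc/ssl/certs:

```shell
# Sketch: install a CA into an OpenSSL-style hashed certs directory.
work=$(mktemp -d)
# Generate a throwaway self-signed CA certificate for the demo.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' -days 2 \
  -keyout "$work/ca.key" -out "$work/demoCA.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$work/demoCA.pem")
# OpenSSL resolves CAs via <subject-hash>.0 symlinks.
ln -fs "$work/demoCA.pem" "$work/$hash.0"
test -L "$work/$hash.0" && echo "linked as $hash.0"
```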
	I1222 00:22:19.689290 1440600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693035 1440600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:22:19.693063 1440600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 00:22:19.693070 1440600 command_runner.go:130] > Device: 259,1	Inode: 3898609     Links: 1
	I1222 00:22:19.693078 1440600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 00:22:19.693115 1440600 command_runner.go:130] > Access: 2025-12-22 00:18:12.483760857 +0000
	I1222 00:22:19.693127 1440600 command_runner.go:130] > Modify: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693132 1440600 command_runner.go:130] > Change: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693137 1440600 command_runner.go:130] >  Birth: 2025-12-22 00:14:07.469855514 +0000
	I1222 00:22:19.693272 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:22:19.733914 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.734424 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:22:19.775247 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.775751 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:22:19.816615 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.817124 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:22:19.858237 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.858742 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:22:19.899966 1440600 command_runner.go:130] > Certificate will not expire
	I1222 00:22:19.900073 1440600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:22:19.941050 1440600 command_runner.go:130] > Certificate will not expire
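	Each `-checkend 86400` run above asserts that the certificate will still be valid 86400 seconds (one day) from now; OpenSSL prints "Certificate will not expire" and exits 0 when that holds. A sketch with a throwaway certificate (names and paths are illustrative):

```shell
# Sketch: what "openssl x509 -checkend 86400" checks.
work=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' -days 2 \
  -keyout "$work/k.pem" -out "$work/c.pem" 2>/dev/null
# Exit status 0 means the cert remains valid for at least another 86400s.
if openssl x509 -noout -in "$work/c.pem" -checkend 86400 >/dev/null; then
  ok=yes
  echo "valid for at least one more day"
fi
```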
	I1222 00:22:19.941558 1440600 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:22:19.941671 1440600 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:22:19.941755 1440600 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:22:19.969312 1440600 cri.go:96] found id: ""
	I1222 00:22:19.969401 1440600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:22:19.976791 1440600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 00:22:19.976817 1440600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 00:22:19.976825 1440600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 00:22:19.977852 1440600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:22:19.977869 1440600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:22:19.977970 1440600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:22:19.987953 1440600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:22:19.988422 1440600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.988584 1440600 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "functional-973657" cluster setting kubeconfig missing "functional-973657" context setting]
	I1222 00:22:19.988906 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:19.989373 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:19.989570 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:19.990226 1440600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 00:22:19.990386 1440600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 00:22:19.990501 1440600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 00:22:19.990531 1440600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 00:22:19.990563 1440600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 00:22:19.990584 1440600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 00:22:19.990915 1440600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:22:19.999837 1440600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 00:22:19.999916 1440600 kubeadm.go:602] duration metric: took 22.040118ms to restartPrimaryControlPlane
	I1222 00:22:19.999943 1440600 kubeadm.go:403] duration metric: took 58.401328ms to StartCluster
	I1222 00:22:19.999973 1440600 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.000060 1440600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.000818 1440600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:22:20.001160 1440600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 00:22:20.001573 1440600 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:22:20.001632 1440600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 00:22:20.001706 1440600 addons.go:70] Setting storage-provisioner=true in profile "functional-973657"
	I1222 00:22:20.001719 1440600 addons.go:239] Setting addon storage-provisioner=true in "functional-973657"
	I1222 00:22:20.001742 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.002272 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.005335 1440600 addons.go:70] Setting default-storageclass=true in profile "functional-973657"
	I1222 00:22:20.005371 1440600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-973657"
	I1222 00:22:20.005777 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.009418 1440600 out.go:179] * Verifying Kubernetes components...
	I1222 00:22:20.018228 1440600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:22:20.049014 1440600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 00:22:20.054188 1440600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.054214 1440600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 00:22:20.054285 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.057022 1440600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:22:20.057199 1440600 kapi.go:59] client config for functional-973657: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 00:22:20.057484 1440600 addons.go:239] Setting addon default-storageclass=true in "functional-973657"
	I1222 00:22:20.057515 1440600 host.go:66] Checking if "functional-973657" exists ...
	I1222 00:22:20.057932 1440600 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:22:20.116105 1440600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.116126 1440600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 00:22:20.116211 1440600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:22:20.118476 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.150964 1440600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:22:20.230950 1440600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:22:20.246813 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:20.269038 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:20.989713 1440600 node_ready.go:35] waiting up to 6m0s for node "functional-973657" to be "Ready" ...
	I1222 00:22:20.989868 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.989910 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.989956 1440600 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990019 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:20.990037 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:20.990158 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:20.990237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:20.990539 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.220129 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.281805 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.285548 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.328766 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.389895 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.389951 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.490214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.490305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.490671 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:21.747162 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:21.762982 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:21.851794 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.851892 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.874934 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:21.874990 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:21.990352 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:21.990483 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:21.990846 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.169304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:22.227981 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.231304 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 00:22:22.232314 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.293066 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.293113 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.490400 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.490500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.490834 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:22.906334 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:22.975672 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:22.975713 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:22.990847 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:22.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:22.991200 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:22.991243 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:23.106669 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:23.165342 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.165389 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.490828 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.490919 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.491242 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:23.690784 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:23.756600 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:23.760540 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:23.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:23.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:23.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.489993 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.490454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:24.698734 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:24.769684 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:24.773516 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:24.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:24.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:24.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:24.991642 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:25.485320 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:25.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.490301 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.490614 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:25.576354 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:25.576402 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:25.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:25.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:25.990409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.023839 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:26.088004 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:26.088050 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:26.490597 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.491019 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:26.990635 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:26.990716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:26.991074 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:27.490758 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.490828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.491160 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:27.491213 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:27.990564 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:27.990642 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:27.991013 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.490658 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.490747 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.491022 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:28.831344 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:28.887027 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:28.890561 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:28.990850 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:28.990934 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:28.991236 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.310761 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:29.372391 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:29.372466 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:29.490719 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.490793 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.491132 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:29.989857 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:29.989931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:29.990237 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:29.990280 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:30.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:30.990341 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:30.990414 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:30.990750 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.490503 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.490609 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.490891 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:31.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:31.990771 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:31.991094 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:31.991143 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:32.490784 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.490857 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.491147 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:32.990889 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:32.990957 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:32.991275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.490908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.490983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.491308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:33.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:33.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:33.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:34.489922 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.490003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.490315 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:34.490363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:34.729902 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:34.785155 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:34.788865 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.788903 1440600 retry.go:84] will retry after 5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:34.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:34.990103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:34.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.490036 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.490475 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:35.603941 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:35.664634 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:35.664674 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:35.990278 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:35.990353 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:35.990620 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:36.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.490457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:36.490508 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:36.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:36.990309 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:36.990632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.490265 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.490582 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:37.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:37.990369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:37.990755 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:38.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.491023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:38.491077 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:38.990843 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:38.990915 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:38.991302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.490378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:39.827913 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:39.886956 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:39.887007 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.887042 1440600 retry.go:84] will retry after 5.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:39.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:39.990290 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:39.990608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.490611 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:40.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:40.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:40.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:40.990478 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:41.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.490430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:41.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:41.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:41.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.490198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:42.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:42.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:42.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:43.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:43.490468 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:43.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:43.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:43.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.490462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:44.689826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:44.747699 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:44.751320 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.751358 1440600 retry.go:84] will retry after 11.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:44.990747 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:44.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:44.991101 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:45.490873 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.491354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:45.491411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:45.742662 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:45.802582 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:45.802622 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:45.990273 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:45.990345 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:45.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:46.990196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:46.990269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:46.990588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.490213 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.490626 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:47.990032 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:47.990136 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:47.990574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:47.990636 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:48.490291 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.490369 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.490704 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:48.990450 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:48.990547 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:48.990893 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.490743 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.490839 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.491164 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:49.989920 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:49.990005 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:49.990408 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:50.490031 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.490126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:50.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:50.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:50.990566 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:50.990936 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.490631 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.490764 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.491053 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:51.989876 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:51.989962 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:51.990268 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:52.990110 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:52.990196 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:52.990515 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:52.990572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:53.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.490563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:53.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:53.990092 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:53.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:54.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:54.990024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:54.990327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:55.234826 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:22:55.295545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:55.295592 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.295617 1440600 retry.go:84] will retry after 23.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:55.490907 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.490991 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.491326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:55.491397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:55.990270 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:55.990351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:55.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.490418 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.490484 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.490747 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:56.590202 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:22:56.649053 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:22:56.649099 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:22:56.990592 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:56.990671 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:56.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.490928 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:57.989908 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:57.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:57.990332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:57.990391 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:22:58.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:58.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:58.990106 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:58.990483 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.490276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:22:59.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:22:59.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:22:59.990371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:22:59.990417 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:00.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.490099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:00.990353 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:00.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:00.990690 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.490534 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.490615 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:01.990810 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:01.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:01.991247 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:01.991307 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.490020 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.490365 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:02.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:02.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:02.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.490160 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.490510 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:03.990186 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:03.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:03.990567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:04.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.490056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.490388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:04.490445 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:04.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:04.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:04.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.490618 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.490690 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:05.990715 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:05.990804 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:05.991174 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:06.490848 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.490927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.491264 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:06.491323 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:06.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:06.990038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:06.990349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.490094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.490420 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:07.990138 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:07.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:07.990557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.490297 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:08.990354 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:08.990451 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:08.990812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:08.990863 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:09.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.490727 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.491063 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:09.990673 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:09.990741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:09.991016 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.490839 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.490917 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.491255 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:10.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:10.990187 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:10.990542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:11.490194 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.490275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.490617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:11.490670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:11.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:11.990065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:11.990445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.490175 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.490589 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:12.990181 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:12.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.490414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:13.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:13.990036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:13.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:13.990452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:14.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:14.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:14.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:14.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.490098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:15.990388 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:15.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:15.990804 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:15.990869 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:16.088253 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:16.150952 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:16.151001 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.151025 1440600 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:16.490464 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.490538 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.490881 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:16.990721 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:16.990797 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:16.991127 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.489899 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.489969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.490299 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:17.990075 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:17.990174 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:17.990513 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:18.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:18.490500 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:18.654775 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:23:18.717545 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:18.717590 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:18.989890 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:18.989961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:18.990331 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.490056 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.490166 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:19.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:19.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.490043 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.490401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:20.990442 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:20.990522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:20.990905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:20.990965 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:21.490561 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.490647 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.491034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:21.990814 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:21.990880 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:21.991151 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.489859 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.489933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:22.989907 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:22.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:22.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:23.489920 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.489988 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.490275 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:23.490318 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:23.990022 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:23.990126 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:23.990454 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:24.990113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:24.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:24.990527 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:25.489940 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.490018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.490368 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:25.490426 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:25.989959 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:25.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:25.990391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.490033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.490358 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:26.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:26.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:26.990471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:27.490073 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.490476 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:27.490523 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:27.990208 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:27.990284 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:27.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.161122 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:23:28.220514 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:23:28.224335 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.224393 1440600 retry.go:84] will retry after 41.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 00:23:28.490932 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.491004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.491336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:28.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:28.989984 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:28.990321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.490024 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.490113 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:29.990094 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:29.990474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:29.990557 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:30.490047 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:30.990259 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:30.990329 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:30.990655 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.490411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:31.990014 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:31.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:31.990469 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:32.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.490375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:32.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:32.990098 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:32.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:32.990500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.490003 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:33.990154 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:33.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:33.990566 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:34.490002 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.490090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.490440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:34.490501 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:34.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:34.990397 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.490066 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.490157 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:35.990431 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:35.990505 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:35.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:36.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.490528 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.490835 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:36.490884 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:36.990603 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:36.990678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:36.990954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.490735 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.490807 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.491181 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:37.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:37.990929 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:37.991230 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.489948 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.490349 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:38.990008 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:38.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:38.990436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:38.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:39.489987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.490063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.490393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:39.989944 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:39.990040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:39.990363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:40.990475 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:40.990549 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:40.990889 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:40.990950 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:41.490672 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.490741 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.491008 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:41.990780 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:41.990856 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:41.991209 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.490871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.490954 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.491340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:42.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:42.990404 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:43.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.490032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.490391 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:43.490443 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:43.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:43.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:43.990424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:44.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:44.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:44.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:45.490042 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.490139 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.490488 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:45.490544 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:45.990486 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:45.990563 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:45.990841 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.490648 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.490719 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.491036 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:46.990855 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:46.990935 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:46.991321 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.490334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:47.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:47.990115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:47.990452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:47.990507 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:48.490178 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.490596 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:48.990176 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:48.990258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:48.990544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.490811 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.490901 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:49.989874 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:49.989946 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:49.990300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:50.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.490343 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:50.490397 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:50.990356 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:50.990437 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:50.990752 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.490553 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.490629 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.490975 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:51.990784 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:51.990866 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:51.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.489871 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.489953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:52.989984 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:52.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:52.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:52.990479 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:53.490129 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.490202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.490518 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:53.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:53.990262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:53.990609 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.490055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:54.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:54.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:54.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:55.490063 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.490153 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.490516 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:55.490572 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:55.990452 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:55.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:55.990878 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.490649 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.490982 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:56.990754 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:56.990838 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:56.991192 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.489902 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.489983 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:57.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:57.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:57.990401 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:57.990458 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:23:58.490135 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.490219 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:58.990291 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:58.990363 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:58.990713 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.490474 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.490546 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.490809 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:23:59.990637 1440600 type.go:165] "Request Body" body=""
	I1222 00:23:59.990713 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:23:59.991064 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:23:59.991122 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:00.490316 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.490400 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.490862 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:00.990668 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:00.990739 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:00.991087 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.490883 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.490958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:01.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:01.990026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:01.990325 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:02.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:02.490534 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:02.990031 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:02.990131 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:02.990497 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.490600 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:03.989956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:03.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:03.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.489994 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.490456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:04.989918 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:04.989996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:04.990396 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:05.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.490143 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.490511 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:05.990488 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:05.990562 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:05.990914 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.490753 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.490832 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.491115 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:06.560484 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 00:24:06.618784 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622419 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:06.622526 1440600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:06.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:06.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:06.990383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:06.990433 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:07.490157 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.490524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:07.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:07.990270 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:07.990599 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.490115 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:08.989981 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:08.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:08.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:08.990466 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:09.490075 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.490168 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.490514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:09.609944 1440600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 00:24:09.674734 1440600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674775 1440600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 00:24:09.674856 1440600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 00:24:09.678207 1440600 out.go:179] * Enabled addons: 
	I1222 00:24:09.681672 1440600 addons.go:530] duration metric: took 1m49.680036347s for enable addons: enabled=[]
	I1222 00:24:09.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:09.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:09.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.490125 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.490453 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:10.990344 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:10.990411 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:10.990682 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:10.990727 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:11.490593 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.490672 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.491056 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:11.990903 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:11.990982 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:11.991278 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:12.990005 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:12.990116 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:12.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:13.489991 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.490102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.490441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:13.490498 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:13.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:13.990002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:13.990306 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.490370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:14.989952 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:14.990029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:14.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:15.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.495140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1222 00:24:15.495205 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:15.990139 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:15.990214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:15.990548 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.490265 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.490341 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.490685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:16.990466 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:16.990534 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:16.990810 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.490605 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.491024 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:17.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:17.990888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:17.991232 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:17.991290 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:18.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:18.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:18.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.490152 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.490572 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:19.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:19.990267 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:19.990595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:20.490013 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:20.490529 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:20.990365 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:20.990452 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:20.990874 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.490647 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.490974 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:21.990817 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:21.990890 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:21.991258 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.489990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.490412 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:22.989931 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:22.990001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:22.990291 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:22.990334 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:23.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:23.990185 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:23.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:23.990618 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.490196 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.490263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.490567 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:24.990006 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:24.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:24.990427 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:24.990485 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:25.490302 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.490387 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.490733 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:25.990623 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:25.990702 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:25.990981 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.490787 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.491243 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:26.989906 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:26.989979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:26.990394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:27.490103 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.490176 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.490443 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:27.490482 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:27.989960 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:27.990034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:27.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.490622 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:28.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:28.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:28.990631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:29.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.490117 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:29.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:29.990018 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:29.990122 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:29.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.490142 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:30.990537 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:30.990938 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:31.490581 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.490653 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.490983 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:31.491033 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:31.990778 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:31.990859 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:31.991138 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.489907 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.489978 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.490318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:32.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:32.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:32.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.490158 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:33.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:33.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:33.990432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:33.990495 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:34.490195 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.490327 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.490668 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:34.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:34.990257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:34.990639 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.490386 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.490458 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.490812 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:35.990390 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:35.990464 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:35.990796 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:35.990852 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:36.490573 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.490651 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.490929 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:36.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:36.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:36.991171 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.489889 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.489967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.490339 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:37.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:37.990114 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:37.990447 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:38.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.490058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.490429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:38.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:38.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:38.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:38.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.490269 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.490588 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:39.990227 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:39.990305 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:39.990674 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:40.490443 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.490519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.490858 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:40.490915 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:40.990684 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:40.990753 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:40.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.490794 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.491216 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:41.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:41.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:41.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.489956 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.490310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:42.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:42.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:42.990458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:42.990518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:43.489986 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:43.990126 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:43.990202 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:43.990538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.490436 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:44.990151 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:44.990227 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:44.990551 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:44.990601 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:45.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.490261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.490538 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:45.990641 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:45.990724 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:45.991072 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.491707 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.491786 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.492142 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:46.989879 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:46.989948 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:46.990262 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:47.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.490019 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.490372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:47.490436 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:47.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:47.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:47.990414 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.490104 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.490184 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.490506 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:48.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:48.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:48.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:49.490140 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.490223 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.490576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:49.490637 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:49.990206 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:49.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:49.990555 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.489985 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.490065 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.490434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:50.990242 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:50.990326 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:50.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.490262 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:51.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:51.990069 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:51.990435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:51.990489 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:52.490183 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:52.990255 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:52.990324 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:52.990635 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.489964 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.490363 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:53.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:53.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:53.990423 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:54.489924 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.489996 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.490285 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:54.490333 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:54.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:54.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:54.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.490011 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.490103 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.490421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:55.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:55.990389 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:55.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:56.490486 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.490557 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.490869 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:56.490916 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:56.990725 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:56.990802 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:56.991133 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.490974 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.491290 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:57.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:57.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:57.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.490136 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.490217 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.490543 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:58.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:58.990263 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:58.990550 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:24:58.990605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:24:59.490266 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.490351 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.490696 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:24:59.990527 1440600 type.go:165] "Request Body" body=""
	I1222 00:24:59.990604 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:24:59.990950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.490941 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.491023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.491350 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:00.990417 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:00.990491 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:00.990807 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:00.990855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:01.490637 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.490718 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.491102 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:01.990857 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:01.990933 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:01.991226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.489934 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.490011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.490376 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:02.990107 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:02.990182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:02.990528 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:03.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.490603 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:03.490666 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:03.990323 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:03.990398 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:03.990774 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.490099 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.490181 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:04.990232 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:04.990304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:04.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.490070 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.490474 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:05.990399 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:05.990480 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:05.990832 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:05.990887 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:06.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.491014 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:06.990839 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:06.990944 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:06.991373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.490104 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.490446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:07.990134 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:07.990205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:07.990514 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:08.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.490044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:08.490484 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:08.990161 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:08.990242 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:08.990563 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.490209 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.490287 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.490623 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:09.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:09.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:09.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:10.490169 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.490256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.490641 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:10.490691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:10.990484 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:10.990556 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:10.990880 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.490691 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.490770 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.491148 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:11.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:11.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:11.990370 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.490057 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.490147 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.490467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:12.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:12.990252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:12.990598 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:12.990662 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:13.490189 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.490632 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:13.990187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:13.990261 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:13.990530 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.490218 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.490295 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.490648 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:14.990225 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:14.990310 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:14.990701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:14.990757 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:15.490513 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.490584 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.490919 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:15.990746 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:15.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:15.991183 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.489918 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.490332 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:16.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:16.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:16.990305 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:17.489916 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.489993 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:17.490411 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:17.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:17.990067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:17.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.490344 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:18.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:18.990062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:18.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:19.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.490439 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:19.490499 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:19.990000 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:19.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:19.990384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.490061 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.490152 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:20.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:20.990524 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:20.990901 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:21.490562 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.490638 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.490935 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:21.490983 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:21.990766 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:21.990844 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:21.991203 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.490028 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:22.989946 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:22.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:22.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.490115 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.490193 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.490519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:23.990250 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:23.990328 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:23.990658 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:23.990712 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:24.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.490259 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:24.990067 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:24.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:24.990519 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.490112 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.490189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.490544 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:25.990337 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:25.990408 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:25.990679 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:26.490001 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.490493 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:26.490549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:26.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:26.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:26.990450 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.490004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.490303 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:27.989990 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:27.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:27.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.489996 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.490100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.490409 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:28.989926 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:28.990004 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:28.990293 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:28.990335 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:29.489955 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.490419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:29.989986 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:29.990064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:29.990428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.490113 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.490182 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.490466 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:30.990514 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:30.990608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:30.990968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:30.991038 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:31.490801 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.490876 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.491238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:31.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:31.990010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:31.990319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.490105 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.490200 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.490583 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:32.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:32.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:32.990457 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:33.489930 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.490001 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:33.490407 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:33.989968 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:33.990049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:33.990398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.490108 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.490195 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:34.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:34.990023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:34.990366 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.489971 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.490049 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.490369 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:35.990125 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:35.990203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:35.990545 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:35.990599 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:36.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.490569 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:36.990016 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:36.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:36.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.490182 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.490252 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:37.990191 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:37.990266 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:37.990607 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:37.990658 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:38.490325 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.490406 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.490727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:38.990379 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:38.990457 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:38.990798 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.490205 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.490279 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.490564 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:39.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:39.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:39.990463 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:40.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.490394 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:40.490452 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:40.990349 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:40.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:40.990727 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.490014 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.490108 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.490471 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:41.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:41.990033 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:41.990356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.489960 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.490300 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:42.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:42.990099 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:42.990440 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:42.990502 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:43.490037 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.490130 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:43.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:43.990105 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:43.990442 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.490424 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:44.990010 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:44.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:44.990473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:44.990528 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:45.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.490258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.490577 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:45.990696 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:45.990768 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:45.991105 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.490888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.490961 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.491348 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:46.989888 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:46.989967 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:46.990307 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:47.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.490035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:47.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:47.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:47.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.490130 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.490210 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:48.990197 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:48.990271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:48.990616 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:49.490301 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.490378 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.490708 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:49.490767 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:49.990515 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:49.990590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:49.990888 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.490679 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.490750 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.491091 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:50.990830 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:50.990903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:50.991524 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.490235 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.490311 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.490637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:51.990347 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:51.990422 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:51.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:51.990776 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:52.490531 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.490608 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.490905 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:52.990690 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:52.990761 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:52.991011 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.490818 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.490896 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.491226 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:53.989948 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:53.990031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:53.990354 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:54.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.490015 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.490377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:54.490441 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:54.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:54.990111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:54.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.490031 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.490422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:55.990385 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:55.990459 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:55.990735 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:56.490568 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.490669 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.490961 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:56.491006 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:56.990756 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:56.990829 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:56.991170 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.489861 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.489931 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.490260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:57.989963 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:57.990042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:57.990407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.490137 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.490214 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:58.990214 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:58.990306 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:58.990624 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:25:58.990667 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:25:59.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.490433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:25:59.990163 1440600 type.go:165] "Request Body" body=""
	I1222 00:25:59.990236 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:25:59.990615 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.490289 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.490597 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:00.990658 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:00.990731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:00.991092 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:00.991152 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:01.490884 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.490964 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.491316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:01.990041 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:01.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:01.990455 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.490830 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.490906 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.491270 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:02.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:02.990098 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:02.990430 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:03.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.490337 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:03.490390 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:03.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:03.990121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:03.990467 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.490255 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.490608 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:04.990184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:04.990260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:04.990581 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:05.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.490051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.490398 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:05.490456 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:05.990446 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:05.990523 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:05.990849 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.490609 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.490678 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.490954 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:06.990809 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:06.990889 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:06.991238 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.490037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.490389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:07.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:07.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:07.990334 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:07.990374 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:08.490043 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.490140 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:08.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:08.990041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:08.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.490235 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.490498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:09.990254 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:09.990339 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:09.990808 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:09.990886 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:10.490650 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.490731 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.491042 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:10.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:10.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:10.991173 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.489924 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.490252 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:11.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:11.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:11.990444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:12.490161 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.490230 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:12.490550 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:12.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:12.990050 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:12.990399 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.490061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.490418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:13.989935 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:13.990009 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:13.990308 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.490050 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.490144 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:14.989961 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:14.990046 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:14.990441 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:14.990497 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:15.490170 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.490244 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.490586 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:15.990615 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:15.990697 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:15.991007 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.490796 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.490870 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.491907 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:16.990669 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:16.990740 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:16.991032 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:16.991078 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:17.490833 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.490913 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.491260 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:17.989966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:17.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:17.990373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.489932 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.490301 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:18.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:18.990073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:18.990433 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:19.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.490224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.490593 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:19.490656 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:19.990226 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:19.990300 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:19.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.490047 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.490362 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:20.990101 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:20.990177 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:20.990509 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.490185 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.490264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.490595 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:21.989954 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:21.990032 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:21.990395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:21.990455 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:22.489962 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.490385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:22.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:22.990003 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:22.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.490027 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.490129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.490477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:23.990190 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:23.990277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:23.990691 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:23.990747 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:24.490514 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.490583 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.490927 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:24.990720 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:24.990794 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:24.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.490900 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.490979 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.491379 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:25.990094 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:25.990175 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:25.990521 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:26.490373 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.490449 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.490797 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:26.490855 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:26.990580 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:26.990656 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:26.991034 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.490724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.490790 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.491046 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:27.990825 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:27.990904 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:27.991259 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:28.490911 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.490985 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.491318 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:28.491372 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:28.989928 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:28.990007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:28.990342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.490444 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:29.989982 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:29.990058 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:29.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:30.990413 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:30.990487 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:30.990822 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:30.990881 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:31.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.490709 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.491051 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:31.990823 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:31.990893 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:31.991165 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.490952 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.491029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.491373 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:32.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:32.990035 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:32.990378 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.490002 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.490316 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:33.490362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:33.989947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:33.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:33.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.490116 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.490197 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.490500 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:34.989909 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:34.989977 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:34.990263 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:35.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.490026 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:35.490389 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:35.990283 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:35.990360 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:35.990662 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.490184 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.490280 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.490578 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:36.990001 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:36.990095 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:36.990429 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:37.490145 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.490220 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.490554 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:37.490609 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:37.990015 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:37.990097 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:37.990389 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.490110 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.490185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.490492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:38.990019 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:38.990118 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:38.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.490226 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:39.990199 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:39.990276 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:39.990657 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:39.990715 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:40.490392 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.490536 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.490920 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:40.990760 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:40.990841 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:40.991131 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.490923 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.490995 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.491302 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:41.990033 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:41.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:41.990472 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:42.490153 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.490221 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.490485 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:42.490527 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:42.990002 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:42.990101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:42.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.489980 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.490432 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:43.989938 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:43.990011 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.490042 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:44.989995 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:44.990102 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:44.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:44.990519 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:45.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.490260 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.490574 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:45.990602 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:45.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:45.991037 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.490835 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.490909 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.491279 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:46.989941 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:46.990018 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:46.990385 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:47.489947 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.490027 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.490374 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:47.490435 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:47.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:47.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:47.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.490143 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.490233 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.490499 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:48.990174 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:48.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:48.990580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:49.490307 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.490383 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.490720 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:49.490772 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:49.990521 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:49.990599 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:49.990879 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.490716 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.491079 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:50.990724 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:50.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:50.991088 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:51.490793 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.490872 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.491153 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:51.491198 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:51.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:51.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:51.990446 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.489977 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.490057 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:52.990118 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:52.990246 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:52.990561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.489966 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.490415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:53.989985 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:53.990063 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:53.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:53.990481 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:54.489936 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.490008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.490340 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:54.989971 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:54.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:54.990421 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.490133 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.490213 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.490553 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:55.990361 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:55.990433 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:55.990726 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:55.990770 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:56.490522 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.490596 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.490941 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:56.990624 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:56.990700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:56.991017 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.490692 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.490956 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:57.990832 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:57.990908 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:57.991282 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:26:57.991347 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:26:58.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.490053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:58.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:58.989998 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:58.990284 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.489983 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.490073 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.490464 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:26:59.990192 1440600 type.go:165] "Request Body" body=""
	I1222 00:26:59.990275 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:26:59.990617 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:00.490281 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.490364 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.490677 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:00.490726 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:00.990700 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:00.990777 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:00.991140 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.490903 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.491267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:01.989976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:01.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:01.990417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.490437 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:02.990175 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:02.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:02.990605 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:02.990670 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:03.490193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.490272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.490631 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:03.990322 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:03.990405 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:03.990824 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.490523 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.490601 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.490958 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:04.990688 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:04.990756 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:04.991031 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:04.991073 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:05.490786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.491193 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:05.990885 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:05.990960 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:05.991336 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.489989 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.490367 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:06.990048 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:06.990148 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:06.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:07.490226 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.490304 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.490653 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:07.490707 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:07.990247 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:07.990320 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:07.990637 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.489976 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.490052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.490406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:08.990024 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:08.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:08.990481 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.489938 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.490010 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.490353 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:09.989955 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:09.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:09.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:09.990473 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:10.490149 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.490228 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.490601 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:10.990609 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:10.990681 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:10.990963 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.490826 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.490912 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.491261 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:11.989991 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:11.990068 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:11.990456 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:11.990514 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:12.489967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.490323 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:12.989997 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:12.990096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:12.990418 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.490038 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:13.990095 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:13.990171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:13.990492 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:13.990549 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:14.489992 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.490067 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.490428 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:14.990144 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:14.990224 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:14.990592 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.490207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.490277 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.490570 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:15.990543 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:15.990628 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:15.991069 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:15.991135 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:16.490872 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.490956 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.491310 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:16.989967 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:16.990053 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:16.990388 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.490123 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.490206 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.490561 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:17.990295 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:17.990377 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:17.990730 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:18.490453 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.490522 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.490787 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:18.490828 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:18.990605 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:18.990682 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:18.991041 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.490876 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.490953 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.491342 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:19.990007 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:19.990107 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:19.990413 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.489981 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.490407 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:20.990447 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:20.990519 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:20.990864 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:20.990919 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:21.490621 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.490700 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.490968 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:21.990741 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:21.990818 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:21.991152 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.490904 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.490981 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.491320 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:22.989871 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:22.989939 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:22.990221 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:23.489972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.490045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.490356 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:23.490418 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:23.989980 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:23.990055 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:23.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.489968 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.490036 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:24.989977 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:24.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:24.990381 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.490041 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.490383 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:25.989925 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:25.989999 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:25.990314 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:25.990363 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:26.490009 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.490096 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.490426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:26.990011 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:26.990100 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:26.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.490090 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.490172 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.490501 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:27.989983 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:27.990059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:27.990431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:27.990490 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:28.490022 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.490121 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.490494 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:28.990112 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:28.990185 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:28.990505 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.490221 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.490292 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.490664 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:29.990003 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:29.990074 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:29.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:30.490150 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.490225 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.490541 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:30.490592 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:30.990489 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:30.990567 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:30.990902 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.490527 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.490606 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.490937 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:31.990701 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:31.990772 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:31.991052 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:32.490780 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.490863 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.491194 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:32.491251 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:32.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:32.990061 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:32.990468 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.489931 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.490000 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.490333 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:33.989999 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:33.990090 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:33.990434 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.490164 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.490237 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.490525 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:34.990013 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:34.990133 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:34.990419 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:34.990463 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:35.489998 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.490072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.490402 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:35.990424 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:35.990500 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:35.990847 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.490629 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.490699 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.491002 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:36.990786 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:36.990862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:36.991205 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:36.991272 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:37.489933 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.490007 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.490341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:37.990047 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:37.990142 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:37.990426 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.489979 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.490062 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.490435 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:38.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:38.990127 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:38.990498 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:39.490187 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.490257 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.490523 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:39.490565 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:39.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:39.990052 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:39.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.490020 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.490112 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.490458 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:40.990313 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:40.990393 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:40.990738 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:41.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.490590 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.490933 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:41.490989 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:41.990751 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:41.990828 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:41.991149 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.489855 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.489927 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.490220 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:42.989969 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:42.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:42.990392 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.489963 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.490355 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:43.989943 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:43.990021 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:43.990364 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:43.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:44.490068 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.490167 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.490496 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:44.989975 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:44.990048 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:44.990405 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.490098 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.490445 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:45.990499 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:45.990579 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:45.990932 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:45.990988 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:46.490765 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.490851 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.491199 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:46.989903 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:46.989975 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:46.990267 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.489959 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.490040 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.490410 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:47.990193 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:47.990272 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:47.990633 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:48.490191 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.490268 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.490557 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:48.490605 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:48.989973 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:48.990054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:48.990411 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.490121 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.490203 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.490602 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:49.990180 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:49.990256 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:49.990573 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:50.490264 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.490334 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.490701 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:50.490762 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:50.990600 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:50.990676 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:50.991023 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.490816 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.490888 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.491166 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:51.989883 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:51.989958 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:51.990326 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.489958 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.490029 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.490386 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:52.989930 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:52.990008 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:52.990311 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:52.990362 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:53.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.490023 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.490371 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:53.989949 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:53.990030 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:53.990375 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.490788 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.490862 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.491123 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:54.990890 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:54.990969 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:54.991274 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:54.991322 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:55.489946 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.490034 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.490395 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:55.990183 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:55.990253 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:55.990532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.490206 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.490281 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.490594 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:56.989987 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:56.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:56.990406 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:57.490128 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.490205 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.490478 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:57.490521 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:57.990207 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:57.990282 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:57.990685 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.490512 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.490950 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:58.990719 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:58.990795 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:58.991070 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:27:59.490852 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.490926 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.491272 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:27:59.491325 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:27:59.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:27:59.990075 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:27:59.990477 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.491169 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.491258 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.491580 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:00.990547 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:00.990624 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:00.991006 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.490798 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.490875 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.491244 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:01.989992 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:01.990060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:01.990372 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:01.990421 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:02.490072 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.490171 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.490504 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:02.990211 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:02.990296 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:02.990636 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.490179 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.490250 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.490508 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:03.989979 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:03.990056 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:03.990393 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:03.990451 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:04.489988 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.490064 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.490417 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:04.989923 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:04.990006 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:04.990341 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.490018 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.490111 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:05.989994 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:05.990072 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:05.990422 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:05.990476 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:06.490127 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.490204 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.490473 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:06.989958 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:06.990037 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:06.990380 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.489982 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.490060 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.490431 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:07.990121 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:07.990189 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:07.990482 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:07.990525 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:08.489974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.490066 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.490425 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:08.990156 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:08.990234 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:08.990576 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.490202 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.490271 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.490542 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:09.989972 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:09.990045 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:09.990377 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:10.490071 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.490165 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.490532 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:10.490597 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:10.990314 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:10.990397 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:10.990739 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.490506 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.490591 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.490943 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:11.990748 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:11.990830 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:11.991167 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.489852 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.489923 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.490225 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:12.989970 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:12.990044 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:12.990403 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:12.990470 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:13.489984 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.490059 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.490416 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:13.990123 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:13.990198 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:13.990462 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.489978 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.490054 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.490452 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:14.990189 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:14.990264 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:14.990627 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:14.990691 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:15.490186 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.490254 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.490558 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:15.990404 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:15.990479 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:15.990821 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.490635 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.490717 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.491027 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:16.990834 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:16.990930 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:16.991327 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:16.991378 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:17.489874 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.489955 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.490319 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:17.990029 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:17.990129 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:17.990461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.489952 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.490024 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.490359 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:18.989974 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:18.990051 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:18.990415 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:19.489997 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.490101 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.490461 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 00:28:19.490518 1440600 node_ready.go:55] error getting node "functional-973657" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-973657": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 00:28:19.990172 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:19.990243 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:19.990549 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.489961 1440600 type.go:165] "Request Body" body=""
	I1222 00:28:20.490039 1440600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-973657" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1222 00:28:20.490384 1440600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 00:28:20.990305 1440600 node_ready.go:38] duration metric: took 6m0.000552396s for node "functional-973657" to be "Ready" ...
	I1222 00:28:20.993510 1440600 out.go:203] 
	W1222 00:28:20.996431 1440600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 00:28:20.996456 1440600 out.go:285] * 
	W1222 00:28:20.998594 1440600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:28:21.002257 1440600 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:28:28 functional-973657 containerd[5251]: time="2025-12-22T00:28:28.341976059Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.376832555Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.379095838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.386895507Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:29 functional-973657 containerd[5251]: time="2025-12-22T00:28:29.387397526Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.413533041Z" level=info msg="No images store for sha256:752b9ba1e553bacfdee75fccc26fb899d1f930e210eb3b7f0c4eebd90988bda3"
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.416392292Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-973657\""
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.425048683Z" level=info msg="ImageCreate event name:\"sha256:082049fa7835e23c46c09f80be520b2afb0d7d032957be9d461df564fef85ac1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:30 functional-973657 containerd[5251]: time="2025-12-22T00:28:30.425530361Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.218038733Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.220631093Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.223026347Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 22 00:28:31 functional-973657 containerd[5251]: time="2025-12-22T00:28:31.235865776Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.124332900Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.126924874Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.130007191Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.136871755Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.303412403Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.305607853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.312621694Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.313589965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.438075263Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.440318607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.450859601Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:28:32 functional-973657 containerd[5251]: time="2025-12-22T00:28:32.451513802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:28:36.452613    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:36.453711    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:36.454486    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:36.455967    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:28:36.456276    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:28:36 up 1 day,  7:11,  0 user,  load average: 0.32, 0.31, 0.87
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:28:33 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:33 functional-973657 kubelet[9179]: E1222 00:28:33.557984    9179 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:33 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:33 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:34 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 22 00:28:34 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:34 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:34 functional-973657 kubelet[9265]: E1222 00:28:34.297056    9265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:34 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:34 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:34 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 22 00:28:34 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:34 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:35 functional-973657 kubelet[9287]: E1222 00:28:35.061372    9287 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:35 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:35 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:35 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 22 00:28:35 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:35 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:35 functional-973657 kubelet[9316]: E1222 00:28:35.807994    9316 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:28:35 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:28:35 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:28:36 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 22 00:28:36 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:28:36 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (349.594573ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (733.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-973657 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1222 00:31:29.154308 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:33:07.826612 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:34:30.873393 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:36:29.154341 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:38:07.832696 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-973657 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m11.247735153s)

-- stdout --
	* [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00033138s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-973657 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m11.250818613s for "functional-973657" cluster.
I1222 00:40:48.661774 1396864 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (344.459726ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-722318 image ls --format yaml --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh     │ functional-722318 ssh pgrep buildkitd                                                                                                                 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ image   │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr                                                │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format json --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format table --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls                                                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ delete  │ -p functional-722318                                                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ start   │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ start   │ -p functional-973657 --alsologtostderr -v=8                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:22 UTC │                     │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:latest                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add minikube-local-cache-test:functional-973657                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache delete minikube-local-cache-test:functional-973657                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl images                                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ cache   │ functional-973657 cache reload                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ kubectl │ functional-973657 kubectl -- --context functional-973657 get pods                                                                                     │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ start   │ -p functional-973657 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:28:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:28:37.451822 1446402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:28:37.451933 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.451942 1446402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:28:37.451946 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.452197 1446402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:28:37.453530 1446402 out.go:368] Setting JSON to false
	I1222 00:28:37.454369 1446402 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":112270,"bootTime":1766251047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:28:37.454418 1446402 start.go:143] virtualization:  
	I1222 00:28:37.457786 1446402 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:28:37.461618 1446402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:28:37.461721 1446402 notify.go:221] Checking for updates...
	I1222 00:28:37.467381 1446402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:28:37.470438 1446402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:28:37.473311 1446402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:28:37.476105 1446402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:28:37.479015 1446402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:28:37.482344 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:37.482442 1446402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:28:37.509513 1446402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:28:37.509620 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.577428 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.567598413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.577529 1446402 docker.go:319] overlay module found
	I1222 00:28:37.580701 1446402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:28:37.583433 1446402 start.go:309] selected driver: docker
	I1222 00:28:37.583443 1446402 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableC
oreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.583549 1446402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:28:37.583656 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.637869 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.628834862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.638333 1446402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:28:37.638357 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:37.638411 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:37.638452 1446402 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCo
reDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.641536 1446402 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:28:37.644340 1446402 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:28:37.647258 1446402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:28:37.650255 1446402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:28:37.650391 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:37.650410 1446402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:28:37.650417 1446402 cache.go:65] Caching tarball of preloaded images
	I1222 00:28:37.650491 1446402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:28:37.650499 1446402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:28:37.650609 1446402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:28:37.670527 1446402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:28:37.670540 1446402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:28:37.670559 1446402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:28:37.670589 1446402 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:28:37.670659 1446402 start.go:364] duration metric: took 50.988µs to acquireMachinesLock for "functional-973657"
	I1222 00:28:37.670679 1446402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:28:37.670683 1446402 fix.go:54] fixHost starting: 
	I1222 00:28:37.670937 1446402 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:28:37.688276 1446402 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:28:37.688299 1446402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:28:37.691627 1446402 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:28:37.691654 1446402 machine.go:94] provisionDockerMachine start ...
	I1222 00:28:37.691736 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.709165 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.709504 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.709511 1446402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:28:37.842221 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:37.842236 1446402 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:28:37.842299 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.861944 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.862401 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.862411 1446402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:28:38.004653 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:38.004757 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.029552 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:38.029903 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:38.029921 1446402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:28:38.166540 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:28:38.166558 1446402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:28:38.166588 1446402 ubuntu.go:190] setting up certificates
	I1222 00:28:38.166605 1446402 provision.go:84] configureAuth start
	I1222 00:28:38.166666 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:38.184810 1446402 provision.go:143] copyHostCerts
	I1222 00:28:38.184868 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:28:38.184883 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:28:38.184958 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:28:38.185063 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:28:38.185068 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:28:38.185094 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:28:38.185151 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:28:38.185154 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:28:38.185176 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:28:38.185228 1446402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:28:38.572282 1446402 provision.go:177] copyRemoteCerts
	I1222 00:28:38.572338 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:28:38.572378 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.590440 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.686182 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:28:38.704460 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:28:38.721777 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:28:38.739280 1446402 provision.go:87] duration metric: took 572.652959ms to configureAuth
	I1222 00:28:38.739299 1446402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:28:38.739484 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:38.739490 1446402 machine.go:97] duration metric: took 1.047830613s to provisionDockerMachine
	I1222 00:28:38.739496 1446402 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:28:38.739506 1446402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:28:38.739568 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:28:38.739605 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.761201 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.864350 1446402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:28:38.868359 1446402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:28:38.868379 1446402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:28:38.868390 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:28:38.868447 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:28:38.868524 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:28:38.868598 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:28:38.868641 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:28:38.878975 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:38.897171 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:28:38.915159 1446402 start.go:296] duration metric: took 175.648245ms for postStartSetup
	I1222 00:28:38.915247 1446402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:28:38.915286 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.933740 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.031561 1446402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:28:39.036720 1446402 fix.go:56] duration metric: took 1.366028879s for fixHost
	I1222 00:28:39.036736 1446402 start.go:83] releasing machines lock for "functional-973657", held for 1.366069585s
	I1222 00:28:39.036807 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:39.056063 1446402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:28:39.056131 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.056209 1446402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:28:39.056284 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.084466 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.086214 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.182487 1446402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:28:39.277379 1446402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:28:39.281860 1446402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:28:39.281935 1446402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:28:39.290006 1446402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:28:39.290021 1446402 start.go:496] detecting cgroup driver to use...
	I1222 00:28:39.290053 1446402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:28:39.290134 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:28:39.305829 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:28:39.319320 1446402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:28:39.319374 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:28:39.335346 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:28:39.349145 1446402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:28:39.473478 1446402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:28:39.618008 1446402 docker.go:234] disabling docker service ...
	I1222 00:28:39.618090 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:28:39.634656 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:28:39.647677 1446402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:28:39.771400 1446402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:28:39.894302 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:28:39.907014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:28:39.920771 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:28:39.929451 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:28:39.938829 1446402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:28:39.938905 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:28:39.947569 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.956482 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:28:39.965074 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.973881 1446402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:28:39.981977 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:28:39.990962 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:28:39.999843 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:28:40.013571 1446402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:28:40.024830 1446402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:28:40.034498 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.154100 1446402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:28:40.334682 1446402 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:28:40.334744 1446402 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:28:40.338667 1446402 start.go:564] Will wait 60s for crictl version
	I1222 00:28:40.338723 1446402 ssh_runner.go:195] Run: which crictl
	I1222 00:28:40.342335 1446402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:28:40.367245 1446402 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:28:40.367308 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.389012 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.418027 1446402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:28:40.420898 1446402 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:28:40.437638 1446402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:28:40.444854 1446402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:28:40.447771 1446402 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:28:40.447915 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:40.447997 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.473338 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.473351 1446402 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:28:40.473409 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.498366 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.498377 1446402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:28:40.498383 1446402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:28:40.498490 1446402 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:28:40.498554 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:28:40.524507 1446402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:28:40.524524 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:40.524533 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:40.524546 1446402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:28:40.524568 1446402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:28:40.524688 1446402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:28:40.524764 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:28:40.533361 1446402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:28:40.533424 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:28:40.541244 1446402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:28:40.555755 1446402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:28:40.568267 1446402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1222 00:28:40.581122 1446402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:28:40.585058 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.703120 1446402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:28:40.989767 1446402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:28:40.989777 1446402 certs.go:195] generating shared ca certs ...
	I1222 00:28:40.989791 1446402 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:28:40.989935 1446402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:28:40.989982 1446402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:28:40.989987 1446402 certs.go:257] generating profile certs ...
	I1222 00:28:40.990067 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:28:40.990138 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:28:40.990175 1446402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:28:40.990291 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:28:40.990321 1446402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:28:40.990328 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:28:40.990354 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:28:40.990377 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:28:40.990400 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:28:40.990449 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:40.991096 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:28:41.014750 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:28:41.036655 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:28:41.057901 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:28:41.075308 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:28:41.092360 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:28:41.110513 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:28:41.128091 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:28:41.145457 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:28:41.163271 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:28:41.181040 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:28:41.199219 1446402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:28:41.211792 1446402 ssh_runner.go:195] Run: openssl version
	I1222 00:28:41.217908 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.225276 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:28:41.232519 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236312 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236370 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.277548 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:28:41.285110 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.292519 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:28:41.300133 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304025 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304090 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.345481 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:28:41.353129 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.360704 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:28:41.368364 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372067 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372146 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.413233 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 00:28:41.421216 1446402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:28:41.424941 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:28:41.465845 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:28:41.509256 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:28:41.550176 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:28:41.591240 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:28:41.636957 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 00:28:41.677583 1446402 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:41.677666 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:28:41.677732 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.707257 1446402 cri.go:96] found id: ""
	I1222 00:28:41.707323 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:28:41.715403 1446402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:28:41.715412 1446402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:28:41.715487 1446402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:28:41.722811 1446402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.723316 1446402 kubeconfig.go:125] found "functional-973657" server: "https://192.168.49.2:8441"
	I1222 00:28:41.724615 1446402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:28:41.732758 1446402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:14:06.897851329 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:28:40.577260246 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1222 00:28:41.732777 1446402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:28:41.732788 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1222 00:28:41.732853 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.777317 1446402 cri.go:96] found id: ""
	I1222 00:28:41.777381 1446402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:28:41.795672 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:28:41.803787 1446402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 00:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 22 00:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:18 /etc/kubernetes/scheduler.conf
	
	I1222 00:28:41.803861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:28:41.811861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:28:41.819685 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.819741 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:28:41.827761 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.835493 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.835553 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.843556 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:28:41.851531 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.851587 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:28:41.860145 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:28:41.868219 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:41.913117 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.003962 1446402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090816856s)
	I1222 00:28:43.004040 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.212066 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.273727 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.319285 1446402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:28:43.319357 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:43.819515 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep probe repeated every ~500ms with no match, 00:28:44 through 00:29:42 ...]
	I1222 00:29:42.820218 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:43.320359 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:43.320440 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:43.346534 1446402 cri.go:96] found id: ""
	I1222 00:29:43.346547 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.346555 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:43.346560 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:43.346649 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:43.373797 1446402 cri.go:96] found id: ""
	I1222 00:29:43.373813 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.373820 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:43.373825 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:43.373887 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:43.399270 1446402 cri.go:96] found id: ""
	I1222 00:29:43.399284 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.399291 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:43.399296 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:43.399363 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:43.423840 1446402 cri.go:96] found id: ""
	I1222 00:29:43.423855 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.423862 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:43.423868 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:43.423926 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:43.447537 1446402 cri.go:96] found id: ""
	I1222 00:29:43.447551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.447558 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:43.447564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:43.447626 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:43.474001 1446402 cri.go:96] found id: ""
	I1222 00:29:43.474016 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.474024 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:43.474029 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:43.474123 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:43.502707 1446402 cri.go:96] found id: ""
	I1222 00:29:43.502721 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.502728 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:43.502736 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:43.502746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:43.560014 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:43.560034 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:43.575973 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:43.575990 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:43.644984 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:43.636222   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.636633   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638265   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638673   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.640308   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:43.636222   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.636633   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638265   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638673   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.640308   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:43.644996 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:43.645007 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:43.711821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:43.711841 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:46.243876 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:46.255639 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:46.255701 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:46.285594 1446402 cri.go:96] found id: ""
	I1222 00:29:46.285608 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.285615 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:46.285621 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:46.285685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:46.313654 1446402 cri.go:96] found id: ""
	I1222 00:29:46.313669 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.313676 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:46.313694 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:46.313755 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:46.339799 1446402 cri.go:96] found id: ""
	I1222 00:29:46.339815 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.339822 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:46.339828 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:46.339891 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:46.365156 1446402 cri.go:96] found id: ""
	I1222 00:29:46.365184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.365192 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:46.365198 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:46.365265 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:46.394145 1446402 cri.go:96] found id: ""
	I1222 00:29:46.394159 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.394167 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:46.394172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:46.394233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:46.418776 1446402 cri.go:96] found id: ""
	I1222 00:29:46.418790 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.418797 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:46.418803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:46.418864 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:46.442806 1446402 cri.go:96] found id: ""
	I1222 00:29:46.442820 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.442828 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:46.442841 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:46.442851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:46.499137 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:46.499157 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:46.515023 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:46.515038 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:46.583664 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:46.574797   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.575467   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577211   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577813   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.579510   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:46.574797   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.575467   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577211   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577813   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.579510   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:46.583675 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:46.583687 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:46.647550 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:46.647569 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.182538 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:49.192713 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:49.192773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:49.216898 1446402 cri.go:96] found id: ""
	I1222 00:29:49.216912 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.216919 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:49.216924 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:49.216980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:49.249605 1446402 cri.go:96] found id: ""
	I1222 00:29:49.249618 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.249626 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:49.249631 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:49.249690 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:49.280524 1446402 cri.go:96] found id: ""
	I1222 00:29:49.280539 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.280546 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:49.280552 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:49.280611 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:49.311301 1446402 cri.go:96] found id: ""
	I1222 00:29:49.311315 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.311323 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:49.311327 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:49.311385 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:49.336538 1446402 cri.go:96] found id: ""
	I1222 00:29:49.336551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.336559 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:49.336564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:49.336624 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:49.364232 1446402 cri.go:96] found id: ""
	I1222 00:29:49.364247 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.364256 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:49.364262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:49.364326 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:49.388613 1446402 cri.go:96] found id: ""
	I1222 00:29:49.388638 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.388646 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:49.388654 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:49.388664 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:49.451680 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:49.443682   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.444280   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.445741   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.446220   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.447728   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:49.443682   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.444280   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.445741   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.446220   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.447728   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:49.451690 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:49.451701 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:49.514558 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:49.514577 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.543077 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:49.543095 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:49.600979 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:49.600997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:52.116977 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:52.127516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:52.127578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:52.154761 1446402 cri.go:96] found id: ""
	I1222 00:29:52.154783 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.154790 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:52.154796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:52.154857 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:52.180288 1446402 cri.go:96] found id: ""
	I1222 00:29:52.180303 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.180310 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:52.180316 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:52.180376 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:52.208439 1446402 cri.go:96] found id: ""
	I1222 00:29:52.208454 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.208461 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:52.208466 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:52.208527 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:52.233901 1446402 cri.go:96] found id: ""
	I1222 00:29:52.233914 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.233932 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:52.233938 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:52.234004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:52.269797 1446402 cri.go:96] found id: ""
	I1222 00:29:52.269821 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.269829 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:52.269835 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:52.269901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:52.297204 1446402 cri.go:96] found id: ""
	I1222 00:29:52.297219 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.297236 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:52.297242 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:52.297308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:52.326411 1446402 cri.go:96] found id: ""
	I1222 00:29:52.326425 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.326433 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:52.326440 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:52.326450 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:52.387688 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:52.379564   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.380283   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.381856   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.382478   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.383956   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:52.379564   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.380283   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.381856   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.382478   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.383956   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:52.387700 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:52.387716 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:52.453506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:52.453524 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:52.483252 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:52.483269 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:52.540786 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:52.540804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.056509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:55.067103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:55.067178 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:55.093620 1446402 cri.go:96] found id: ""
	I1222 00:29:55.093649 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.093656 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:55.093663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:55.093734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:55.128411 1446402 cri.go:96] found id: ""
	I1222 00:29:55.128424 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.128432 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:55.128436 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:55.128504 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:55.154633 1446402 cri.go:96] found id: ""
	I1222 00:29:55.154646 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.154654 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:55.154659 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:55.154730 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:55.181169 1446402 cri.go:96] found id: ""
	I1222 00:29:55.181184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.181191 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:55.181197 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:55.181256 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:55.206353 1446402 cri.go:96] found id: ""
	I1222 00:29:55.206367 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.206374 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:55.206379 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:55.206439 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:55.234930 1446402 cri.go:96] found id: ""
	I1222 00:29:55.234963 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.234971 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:55.234977 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:55.235052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:55.269275 1446402 cri.go:96] found id: ""
	I1222 00:29:55.269290 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.269298 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:55.269306 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:55.269316 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:55.332423 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:55.332442 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.348393 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:55.348409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:55.411746 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:55.403223   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.403831   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.405510   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.406036   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.407668   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:55.403223   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.403831   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.405510   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.406036   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.407668   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:55.411756 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:55.411767 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:55.478898 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:55.478918 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.007945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:58.028590 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:58.028654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:58.053263 1446402 cri.go:96] found id: ""
	I1222 00:29:58.053277 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.053284 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:58.053290 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:58.053349 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:58.078650 1446402 cri.go:96] found id: ""
	I1222 00:29:58.078664 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.078671 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:58.078676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:58.078746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:58.104284 1446402 cri.go:96] found id: ""
	I1222 00:29:58.104298 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.104305 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:58.104310 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:58.104372 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:58.133078 1446402 cri.go:96] found id: ""
	I1222 00:29:58.133103 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.133110 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:58.133116 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:58.133194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:58.160079 1446402 cri.go:96] found id: ""
	I1222 00:29:58.160092 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.160100 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:58.160105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:58.160209 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:58.184050 1446402 cri.go:96] found id: ""
	I1222 00:29:58.184070 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.184091 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:58.184098 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:58.184161 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:58.207826 1446402 cri.go:96] found id: ""
	I1222 00:29:58.207840 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.207847 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:58.207854 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:58.207864 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:58.275859 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:58.275886 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.308307 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:58.308324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:58.365952 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:58.365971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:58.381771 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:58.381788 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:58.449730 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:00.951841 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:00.968627 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:00.968704 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:01.017629 1446402 cri.go:96] found id: ""
	I1222 00:30:01.017648 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.017657 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:01.017665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:01.017745 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:01.052801 1446402 cri.go:96] found id: ""
	I1222 00:30:01.052819 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.052829 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:01.052837 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:01.052908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:01.090908 1446402 cri.go:96] found id: ""
	I1222 00:30:01.090924 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.090942 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:01.090949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:01.091024 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:01.135566 1446402 cri.go:96] found id: ""
	I1222 00:30:01.135584 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.135592 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:01.135599 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:01.135681 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:01.183704 1446402 cri.go:96] found id: ""
	I1222 00:30:01.183720 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.183728 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:01.183734 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:01.183803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:01.237284 1446402 cri.go:96] found id: ""
	I1222 00:30:01.237300 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.237315 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:01.237321 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:01.237397 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:01.274702 1446402 cri.go:96] found id: ""
	I1222 00:30:01.274719 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.274727 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:01.274735 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:01.274746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:01.337817 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:01.337838 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:01.357916 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:01.357936 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:01.439644 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:01.439657 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:01.439672 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:01.506150 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:01.506173 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.047348 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:04.057922 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:04.057990 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:04.083599 1446402 cri.go:96] found id: ""
	I1222 00:30:04.083613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.083620 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:04.083625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:04.083697 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:04.109159 1446402 cri.go:96] found id: ""
	I1222 00:30:04.109174 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.109181 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:04.109186 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:04.109245 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:04.138314 1446402 cri.go:96] found id: ""
	I1222 00:30:04.138329 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.138336 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:04.138344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:04.138405 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:04.164036 1446402 cri.go:96] found id: ""
	I1222 00:30:04.164051 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.164058 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:04.164078 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:04.164143 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:04.189566 1446402 cri.go:96] found id: ""
	I1222 00:30:04.189581 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.189588 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:04.189593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:04.189657 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:04.214647 1446402 cri.go:96] found id: ""
	I1222 00:30:04.214662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.214669 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:04.214675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:04.214746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:04.243657 1446402 cri.go:96] found id: ""
	I1222 00:30:04.243672 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.243680 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:04.243687 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:04.243700 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:04.312395 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:04.312414 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.342163 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:04.342181 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:04.399936 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:04.399958 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:04.416847 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:04.416863 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:04.482794 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:06.983066 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:06.993652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:06.993715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:07.023165 1446402 cri.go:96] found id: ""
	I1222 00:30:07.023180 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.023187 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:07.023192 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:07.023255 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:07.049538 1446402 cri.go:96] found id: ""
	I1222 00:30:07.049552 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.049560 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:07.049565 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:07.049629 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:07.075257 1446402 cri.go:96] found id: ""
	I1222 00:30:07.075277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.075284 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:07.075289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:07.075351 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:07.101441 1446402 cri.go:96] found id: ""
	I1222 00:30:07.101456 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.101463 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:07.101469 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:07.101532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:07.128366 1446402 cri.go:96] found id: ""
	I1222 00:30:07.128380 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.128392 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:07.128398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:07.128460 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:07.152988 1446402 cri.go:96] found id: ""
	I1222 00:30:07.153005 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.153013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:07.153019 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:07.153079 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:07.178387 1446402 cri.go:96] found id: ""
	I1222 00:30:07.178401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.178409 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:07.178428 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:07.178440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:07.194549 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:07.194566 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:07.271952 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:07.271961 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:07.271973 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:07.346114 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:07.346134 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:07.373577 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:07.373593 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:09.930306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:09.940949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:09.941017 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:09.968763 1446402 cri.go:96] found id: ""
	I1222 00:30:09.968777 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.968784 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:09.968789 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:09.968848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:09.992991 1446402 cri.go:96] found id: ""
	I1222 00:30:09.993006 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.993013 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:09.993018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:09.993082 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:10.029788 1446402 cri.go:96] found id: ""
	I1222 00:30:10.029804 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.029811 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:10.029817 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:10.029886 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:10.067395 1446402 cri.go:96] found id: ""
	I1222 00:30:10.067410 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.067416 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:10.067422 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:10.067499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:10.095007 1446402 cri.go:96] found id: ""
	I1222 00:30:10.095022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.095030 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:10.095036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:10.095101 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:10.123474 1446402 cri.go:96] found id: ""
	I1222 00:30:10.123495 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.123503 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:10.123509 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:10.123573 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:10.153420 1446402 cri.go:96] found id: ""
	I1222 00:30:10.153435 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.153441 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:10.153448 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:10.153459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:10.210172 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:10.210193 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:10.226706 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:10.226725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:10.315292 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:10.315303 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:10.315313 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:10.383703 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:10.383725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:12.913638 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:12.925302 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:12.925369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:12.950905 1446402 cri.go:96] found id: ""
	I1222 00:30:12.950919 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.950930 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:12.950935 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:12.950996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:12.975557 1446402 cri.go:96] found id: ""
	I1222 00:30:12.975587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.975596 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:12.975609 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:12.975679 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:13.000143 1446402 cri.go:96] found id: ""
	I1222 00:30:13.000157 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.000165 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:13.000171 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:13.000234 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:13.026672 1446402 cri.go:96] found id: ""
	I1222 00:30:13.026694 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.026702 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:13.026709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:13.026773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:13.055830 1446402 cri.go:96] found id: ""
	I1222 00:30:13.055846 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.055854 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:13.055859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:13.055923 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:13.082359 1446402 cri.go:96] found id: ""
	I1222 00:30:13.082374 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.082382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:13.082387 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:13.082449 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:13.108828 1446402 cri.go:96] found id: ""
	I1222 00:30:13.108842 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.108850 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:13.108858 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:13.108869 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:13.165350 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:13.165373 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:13.181480 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:13.181497 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:13.246107 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:13.246118 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:13.246128 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:13.320470 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:13.320490 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:15.851791 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:15.862330 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:15.862391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:15.890336 1446402 cri.go:96] found id: ""
	I1222 00:30:15.890350 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.890358 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:15.890364 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:15.890428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:15.917647 1446402 cri.go:96] found id: ""
	I1222 00:30:15.917662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.917670 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:15.917675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:15.917737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:15.948052 1446402 cri.go:96] found id: ""
	I1222 00:30:15.948074 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.948083 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:15.948089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:15.948155 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:15.973080 1446402 cri.go:96] found id: ""
	I1222 00:30:15.973094 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.973101 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:15.973107 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:15.973167 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:15.998935 1446402 cri.go:96] found id: ""
	I1222 00:30:15.998950 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.998957 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:15.998962 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:15.999025 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:16.027611 1446402 cri.go:96] found id: ""
	I1222 00:30:16.027628 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.027638 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:16.027644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:16.027727 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:16.053780 1446402 cri.go:96] found id: ""
	I1222 00:30:16.053794 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.053802 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:16.053809 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:16.053823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:16.124007 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:16.124030 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:16.124042 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:16.186716 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:16.186736 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:16.216494 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:16.216511 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:16.279107 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:16.279127 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:18.798677 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:18.809493 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:18.809564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:18.835308 1446402 cri.go:96] found id: ""
	I1222 00:30:18.835323 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.835337 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:18.835344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:18.835408 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:18.861968 1446402 cri.go:96] found id: ""
	I1222 00:30:18.861982 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.861989 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:18.861995 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:18.862052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:18.887230 1446402 cri.go:96] found id: ""
	I1222 00:30:18.887243 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.887250 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:18.887256 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:18.887313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:18.912928 1446402 cri.go:96] found id: ""
	I1222 00:30:18.912942 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.912949 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:18.912954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:18.913016 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:18.939487 1446402 cri.go:96] found id: ""
	I1222 00:30:18.939501 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.939509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:18.939514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:18.939578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:18.973342 1446402 cri.go:96] found id: ""
	I1222 00:30:18.973356 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.973364 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:18.973369 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:18.973428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:18.997889 1446402 cri.go:96] found id: ""
	I1222 00:30:18.997913 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.997920 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:18.997927 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:18.997938 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:19.055572 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:19.055591 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:19.072427 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:19.072443 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:19.139616 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:19.139628 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:19.139638 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:19.202678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:19.202697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:21.731757 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:21.742262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:21.742322 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:21.768714 1446402 cri.go:96] found id: ""
	I1222 00:30:21.768728 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.768736 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:21.768741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:21.768804 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:21.799253 1446402 cri.go:96] found id: ""
	I1222 00:30:21.799269 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.799276 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:21.799283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:21.799344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:21.824941 1446402 cri.go:96] found id: ""
	I1222 00:30:21.824963 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.824970 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:21.824975 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:21.825035 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:21.850741 1446402 cri.go:96] found id: ""
	I1222 00:30:21.850755 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.850762 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:21.850767 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:21.850829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:21.876572 1446402 cri.go:96] found id: ""
	I1222 00:30:21.876587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.876595 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:21.876600 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:21.876660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:21.902799 1446402 cri.go:96] found id: ""
	I1222 00:30:21.902814 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.902821 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:21.902827 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:21.902888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:21.928559 1446402 cri.go:96] found id: ""
	I1222 00:30:21.928573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.928580 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:21.928587 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:21.928597 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:21.984144 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:21.984164 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:22.000384 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:22.000402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:22.073778 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:22.073791 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:22.073804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:22.146346 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:22.146377 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.676106 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:24.687741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:24.687862 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:24.714182 1446402 cri.go:96] found id: ""
	I1222 00:30:24.714204 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.714212 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:24.714217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:24.714281 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:24.740930 1446402 cri.go:96] found id: ""
	I1222 00:30:24.740944 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.740951 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:24.740957 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:24.741018 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:24.767599 1446402 cri.go:96] found id: ""
	I1222 00:30:24.767613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.767621 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:24.767626 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:24.767685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:24.792739 1446402 cri.go:96] found id: ""
	I1222 00:30:24.792753 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.792760 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:24.792766 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:24.792827 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:24.816926 1446402 cri.go:96] found id: ""
	I1222 00:30:24.816940 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.816948 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:24.816953 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:24.817012 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:24.842765 1446402 cri.go:96] found id: ""
	I1222 00:30:24.842780 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.842788 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:24.842794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:24.842872 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:24.869078 1446402 cri.go:96] found id: ""
	I1222 00:30:24.869092 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.869099 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:24.869108 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:24.869119 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.903296 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:24.903312 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:24.961056 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:24.961075 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:24.976812 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:24.976828 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:25.069840 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:25.069853 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:25.069866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.636563 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:27.647100 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:27.647166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:27.672723 1446402 cri.go:96] found id: ""
	I1222 00:30:27.672737 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.672745 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:27.672750 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:27.672813 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:27.702441 1446402 cri.go:96] found id: ""
	I1222 00:30:27.702455 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.702462 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:27.702468 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:27.702530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:27.731422 1446402 cri.go:96] found id: ""
	I1222 00:30:27.731436 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.731443 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:27.731448 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:27.731509 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:27.756265 1446402 cri.go:96] found id: ""
	I1222 00:30:27.756279 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.756287 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:27.756292 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:27.756354 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:27.779774 1446402 cri.go:96] found id: ""
	I1222 00:30:27.779791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.779798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:27.779804 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:27.779867 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:27.805305 1446402 cri.go:96] found id: ""
	I1222 00:30:27.805320 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.805327 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:27.805333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:27.805396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:27.835772 1446402 cri.go:96] found id: ""
	I1222 00:30:27.835786 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.835794 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:27.835802 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:27.835813 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:27.851527 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:27.851543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:27.917867 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:27.917877 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:27.917889 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.981255 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:27.981274 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:28.012714 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:28.012732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:30.570668 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:30.581032 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:30.581096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:30.605788 1446402 cri.go:96] found id: ""
	I1222 00:30:30.605801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.605809 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:30.605816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:30.605878 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:30.630263 1446402 cri.go:96] found id: ""
	I1222 00:30:30.630277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.630284 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:30.630289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:30.630348 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:30.655578 1446402 cri.go:96] found id: ""
	I1222 00:30:30.655593 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.655600 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:30.655608 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:30.655668 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:30.680304 1446402 cri.go:96] found id: ""
	I1222 00:30:30.680319 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.680326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:30.680332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:30.680390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:30.706799 1446402 cri.go:96] found id: ""
	I1222 00:30:30.706812 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.706819 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:30.706826 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:30.706888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:30.732009 1446402 cri.go:96] found id: ""
	I1222 00:30:30.732023 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.732030 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:30.732036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:30.732145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:30.758260 1446402 cri.go:96] found id: ""
	I1222 00:30:30.758274 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.758282 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:30.758289 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:30.758302 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:30.773937 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:30.773955 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:30.836710 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:30.836720 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:30.836734 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:30.898609 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:30.898629 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:30.926987 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:30.927002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.488514 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:33.500859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:33.500936 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:33.534647 1446402 cri.go:96] found id: ""
	I1222 00:30:33.534662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.534669 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:33.534675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:33.534740 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:33.567528 1446402 cri.go:96] found id: ""
	I1222 00:30:33.567542 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.567550 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:33.567556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:33.567619 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:33.592756 1446402 cri.go:96] found id: ""
	I1222 00:30:33.592770 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.592777 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:33.592783 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:33.592843 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:33.618141 1446402 cri.go:96] found id: ""
	I1222 00:30:33.618155 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.618162 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:33.618169 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:33.618229 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:33.643676 1446402 cri.go:96] found id: ""
	I1222 00:30:33.643690 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.643697 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:33.643702 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:33.643766 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:33.675007 1446402 cri.go:96] found id: ""
	I1222 00:30:33.675022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.675029 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:33.675035 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:33.675096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:33.701088 1446402 cri.go:96] found id: ""
	I1222 00:30:33.701104 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.701112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:33.701119 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:33.701130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.757879 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:33.757898 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:33.773857 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:33.773873 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:33.838724 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:33.838735 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:33.838745 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:33.901316 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:33.901336 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:36.433582 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:36.443819 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:36.443881 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:36.467506 1446402 cri.go:96] found id: ""
	I1222 00:30:36.467521 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.467528 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:36.467534 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:36.467596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:36.502511 1446402 cri.go:96] found id: ""
	I1222 00:30:36.502525 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.502532 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:36.502538 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:36.502596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:36.528768 1446402 cri.go:96] found id: ""
	I1222 00:30:36.528782 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.528789 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:36.528795 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:36.528856 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:36.563520 1446402 cri.go:96] found id: ""
	I1222 00:30:36.563534 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.563552 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:36.563558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:36.563625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:36.587776 1446402 cri.go:96] found id: ""
	I1222 00:30:36.587791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.587798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:36.587803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:36.587870 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:36.613760 1446402 cri.go:96] found id: ""
	I1222 00:30:36.613774 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.613781 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:36.613786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:36.613846 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:36.638515 1446402 cri.go:96] found id: ""
	I1222 00:30:36.638529 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.638536 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:36.638544 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:36.638554 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:36.697219 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:36.697239 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:36.713436 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:36.713452 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:36.780368 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:36.780381 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:36.780393 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:36.842888 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:36.842908 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.372135 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:39.382719 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:39.382781 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:39.408981 1446402 cri.go:96] found id: ""
	I1222 00:30:39.408994 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.409002 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:39.409007 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:39.409066 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:39.442559 1446402 cri.go:96] found id: ""
	I1222 00:30:39.442573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.442581 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:39.442586 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:39.442643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:39.468577 1446402 cri.go:96] found id: ""
	I1222 00:30:39.468591 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.468598 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:39.468603 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:39.468660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:39.510316 1446402 cri.go:96] found id: ""
	I1222 00:30:39.510331 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.510339 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:39.510345 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:39.510407 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:39.540511 1446402 cri.go:96] found id: ""
	I1222 00:30:39.540526 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.540538 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:39.540544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:39.540607 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:39.567225 1446402 cri.go:96] found id: ""
	I1222 00:30:39.567239 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.567246 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:39.567251 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:39.567313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:39.592091 1446402 cri.go:96] found id: ""
	I1222 00:30:39.592105 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.592112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:39.592119 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:39.592130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.622343 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:39.622362 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:39.679425 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:39.679444 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:39.696213 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:39.696230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:39.769659 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:39.769670 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:39.769680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.336173 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:42.346558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:42.346621 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:42.370787 1446402 cri.go:96] found id: ""
	I1222 00:30:42.370802 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.370810 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:42.370816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:42.370877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:42.395960 1446402 cri.go:96] found id: ""
	I1222 00:30:42.395973 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.395980 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:42.395985 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:42.396044 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:42.421477 1446402 cri.go:96] found id: ""
	I1222 00:30:42.421491 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.421498 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:42.421504 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:42.421564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:42.446555 1446402 cri.go:96] found id: ""
	I1222 00:30:42.446569 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.446577 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:42.446582 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:42.446642 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:42.472081 1446402 cri.go:96] found id: ""
	I1222 00:30:42.472098 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.472105 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:42.472110 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:42.472169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:42.511362 1446402 cri.go:96] found id: ""
	I1222 00:30:42.511375 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.511382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:42.511388 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:42.511447 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:42.547512 1446402 cri.go:96] found id: ""
	I1222 00:30:42.547527 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.547533 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:42.547541 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:42.547551 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.615776 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:42.615799 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:42.646130 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:42.646146 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:42.705658 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:42.705677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:42.721590 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:42.721610 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:42.787813 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.288531 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:45.303331 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:45.303401 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:45.338450 1446402 cri.go:96] found id: ""
	I1222 00:30:45.338484 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.338492 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:45.338499 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:45.338571 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:45.365473 1446402 cri.go:96] found id: ""
	I1222 00:30:45.365487 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.365494 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:45.365500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:45.365561 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:45.390271 1446402 cri.go:96] found id: ""
	I1222 00:30:45.390285 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.390292 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:45.390298 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:45.390357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:45.414377 1446402 cri.go:96] found id: ""
	I1222 00:30:45.414391 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.414398 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:45.414405 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:45.414465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:45.443708 1446402 cri.go:96] found id: ""
	I1222 00:30:45.443722 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.443729 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:45.443735 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:45.443800 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:45.469111 1446402 cri.go:96] found id: ""
	I1222 00:30:45.469126 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.469133 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:45.469138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:45.469199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:45.506648 1446402 cri.go:96] found id: ""
	I1222 00:30:45.506662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.506670 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:45.506678 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:45.506688 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:45.570224 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:45.570244 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:45.587665 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:45.587682 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:45.658642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.658668 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:45.658680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:45.726278 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:45.726296 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:48.258377 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:48.269041 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:48.269106 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:48.296090 1446402 cri.go:96] found id: ""
	I1222 00:30:48.296110 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.296118 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:48.296124 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:48.296189 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:48.324810 1446402 cri.go:96] found id: ""
	I1222 00:30:48.324824 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.324838 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:48.324844 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:48.324907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:48.355386 1446402 cri.go:96] found id: ""
	I1222 00:30:48.355401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.355408 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:48.355413 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:48.355478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:48.382715 1446402 cri.go:96] found id: ""
	I1222 00:30:48.382738 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.382746 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:48.382752 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:48.382829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:48.408554 1446402 cri.go:96] found id: ""
	I1222 00:30:48.408567 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.408574 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:48.408580 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:48.408643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:48.434270 1446402 cri.go:96] found id: ""
	I1222 00:30:48.434293 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.434300 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:48.434306 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:48.434374 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:48.459881 1446402 cri.go:96] found id: ""
	I1222 00:30:48.459895 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.459903 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:48.459911 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:48.459921 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:48.517466 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:48.517484 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:48.537053 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:48.537070 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:48.604854 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:48.604864 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:48.604874 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:48.671361 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:48.671387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:51.200853 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:51.211776 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:51.211839 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:51.238170 1446402 cri.go:96] found id: ""
	I1222 00:30:51.238186 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.238194 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:51.238199 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:51.238268 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:51.269105 1446402 cri.go:96] found id: ""
	I1222 00:30:51.269134 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.269142 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:51.269148 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:51.269219 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:51.293434 1446402 cri.go:96] found id: ""
	I1222 00:30:51.293457 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.293464 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:51.293470 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:51.293541 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:51.319040 1446402 cri.go:96] found id: ""
	I1222 00:30:51.319055 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.319062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:51.319068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:51.319130 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:51.348957 1446402 cri.go:96] found id: ""
	I1222 00:30:51.348974 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.348982 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:51.348987 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:51.349051 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:51.374220 1446402 cri.go:96] found id: ""
	I1222 00:30:51.374234 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.374242 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:51.374248 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:51.374308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:51.399159 1446402 cri.go:96] found id: ""
	I1222 00:30:51.399173 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.399180 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:51.399188 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:51.399198 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:51.459029 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:51.459048 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:51.475298 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:51.475315 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:51.566963 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:51.566987 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:51.566997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:51.629274 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:51.629295 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:54.157280 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:54.168037 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:54.168148 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:54.193307 1446402 cri.go:96] found id: ""
	I1222 00:30:54.193321 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.193328 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:54.193333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:54.193396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:54.219101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.219115 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.219123 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:54.219128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:54.219194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:54.246374 1446402 cri.go:96] found id: ""
	I1222 00:30:54.246389 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.246396 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:54.246407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:54.246465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:54.271786 1446402 cri.go:96] found id: ""
	I1222 00:30:54.271801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.271808 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:54.271813 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:54.271879 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:54.297101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.297116 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.297123 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:54.297128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:54.297187 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:54.321971 1446402 cri.go:96] found id: ""
	I1222 00:30:54.321984 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.321991 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:54.321997 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:54.322057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:54.347313 1446402 cri.go:96] found id: ""
	I1222 00:30:54.347327 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.347334 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:54.347342 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:54.347353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:54.403888 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:54.403909 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:54.419766 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:54.419782 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:54.484682 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:54.484693 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:54.484703 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:54.552360 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:54.552378 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.081711 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:57.092202 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:57.092266 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:57.117391 1446402 cri.go:96] found id: ""
	I1222 00:30:57.117405 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.117412 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:57.117419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:57.117479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:57.143247 1446402 cri.go:96] found id: ""
	I1222 00:30:57.143261 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.143269 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:57.143274 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:57.143336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:57.167819 1446402 cri.go:96] found id: ""
	I1222 00:30:57.167833 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.167840 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:57.167845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:57.167907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:57.199021 1446402 cri.go:96] found id: ""
	I1222 00:30:57.199036 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.199043 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:57.199049 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:57.199108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:57.222971 1446402 cri.go:96] found id: ""
	I1222 00:30:57.222986 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.222993 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:57.222999 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:57.223058 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:57.248778 1446402 cri.go:96] found id: ""
	I1222 00:30:57.248792 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.248800 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:57.248806 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:57.248865 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:57.274281 1446402 cri.go:96] found id: ""
	I1222 00:30:57.274294 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.274301 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:57.274309 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:57.274319 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:57.336861 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:57.336882 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.365636 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:57.365661 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:57.423967 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:57.423989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:57.440127 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:57.440145 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:57.509798 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.010205 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:00.104650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:00.104734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:00.179982 1446402 cri.go:96] found id: ""
	I1222 00:31:00.180032 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.180041 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:00.180071 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:00.180239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:00.284701 1446402 cri.go:96] found id: ""
	I1222 00:31:00.284717 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.284725 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:00.284731 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:00.284803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:00.386635 1446402 cri.go:96] found id: ""
	I1222 00:31:00.386652 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.386659 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:00.386665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:00.386735 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:00.427920 1446402 cri.go:96] found id: ""
	I1222 00:31:00.427944 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.427959 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:00.427966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:00.428040 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:00.465116 1446402 cri.go:96] found id: ""
	I1222 00:31:00.465134 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.465144 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:00.465151 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:00.465232 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:00.499645 1446402 cri.go:96] found id: ""
	I1222 00:31:00.499660 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.499667 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:00.499673 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:00.499747 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:00.537565 1446402 cri.go:96] found id: ""
	I1222 00:31:00.537582 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.537595 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:00.537604 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:00.537615 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:00.575552 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:00.575567 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:00.633041 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:00.633063 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:00.649172 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:00.649187 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:00.724351 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.724361 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:00.724372 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.287306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:03.298001 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:03.298072 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:03.324825 1446402 cri.go:96] found id: ""
	I1222 00:31:03.324840 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.324847 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:03.324859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:03.324922 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:03.350917 1446402 cri.go:96] found id: ""
	I1222 00:31:03.350931 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.350939 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:03.350944 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:03.351006 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:03.379670 1446402 cri.go:96] found id: ""
	I1222 00:31:03.379685 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.379692 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:03.379697 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:03.379757 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:03.404478 1446402 cri.go:96] found id: ""
	I1222 00:31:03.404492 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.404499 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:03.404505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:03.404566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:03.433469 1446402 cri.go:96] found id: ""
	I1222 00:31:03.433483 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.433491 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:03.433496 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:03.433559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:03.458710 1446402 cri.go:96] found id: ""
	I1222 00:31:03.458724 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.458731 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:03.458737 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:03.458798 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:03.489628 1446402 cri.go:96] found id: ""
	I1222 00:31:03.489641 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.489648 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:03.489656 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:03.489666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.561791 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:03.561811 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:03.591660 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:03.591676 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:03.649546 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:03.649564 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:03.665699 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:03.665717 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:03.732939 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.234625 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:06.245401 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:06.245464 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:06.272079 1446402 cri.go:96] found id: ""
	I1222 00:31:06.272093 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.272100 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:06.272105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:06.272166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:06.297857 1446402 cri.go:96] found id: ""
	I1222 00:31:06.297871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.297881 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:06.297886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:06.297947 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:06.323563 1446402 cri.go:96] found id: ""
	I1222 00:31:06.323578 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.323585 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:06.323591 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:06.323654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:06.352113 1446402 cri.go:96] found id: ""
	I1222 00:31:06.352128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.352135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:06.352140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:06.352201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:06.383883 1446402 cri.go:96] found id: ""
	I1222 00:31:06.383897 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.383906 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:06.383911 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:06.383980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:06.410293 1446402 cri.go:96] found id: ""
	I1222 00:31:06.410307 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.410314 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:06.410319 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:06.410379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:06.436428 1446402 cri.go:96] found id: ""
	I1222 00:31:06.436442 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.436449 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:06.436457 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:06.436467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:06.493371 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:06.493391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:06.511382 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:06.511400 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:06.582246 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.582256 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:06.582266 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:06.644909 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:06.644931 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.176116 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:09.186886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:09.186957 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:09.212045 1446402 cri.go:96] found id: ""
	I1222 00:31:09.212081 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.212088 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:09.212094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:09.212169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:09.237345 1446402 cri.go:96] found id: ""
	I1222 00:31:09.237360 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.237367 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:09.237373 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:09.237435 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:09.262938 1446402 cri.go:96] found id: ""
	I1222 00:31:09.262953 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.262960 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:09.262966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:09.263027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:09.288202 1446402 cri.go:96] found id: ""
	I1222 00:31:09.288216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.288223 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:09.288228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:09.288291 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:09.313061 1446402 cri.go:96] found id: ""
	I1222 00:31:09.313075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.313083 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:09.313088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:09.313151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:09.342668 1446402 cri.go:96] found id: ""
	I1222 00:31:09.342683 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.342691 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:09.342696 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:09.342760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:09.370215 1446402 cri.go:96] found id: ""
	I1222 00:31:09.370239 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.370249 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:09.370258 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:09.370270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:09.433823 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:09.433834 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:09.433846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:09.496002 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:09.496024 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.538432 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:09.538457 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:09.599912 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:09.599933 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.117068 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:12.128268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:12.128331 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:12.154851 1446402 cri.go:96] found id: ""
	I1222 00:31:12.154865 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.154873 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:12.154878 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:12.154961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:12.180838 1446402 cri.go:96] found id: ""
	I1222 00:31:12.180852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.180860 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:12.180865 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:12.180927 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:12.205653 1446402 cri.go:96] found id: ""
	I1222 00:31:12.205667 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.205683 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:12.205689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:12.205760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:12.232339 1446402 cri.go:96] found id: ""
	I1222 00:31:12.232352 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.232360 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:12.232365 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:12.232425 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:12.257997 1446402 cri.go:96] found id: ""
	I1222 00:31:12.258013 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.258020 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:12.258026 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:12.258113 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:12.282449 1446402 cri.go:96] found id: ""
	I1222 00:31:12.282464 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.282472 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:12.282478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:12.282548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:12.308351 1446402 cri.go:96] found id: ""
	I1222 00:31:12.308365 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.308372 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:12.308380 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:12.308391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:12.365268 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:12.365286 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.381163 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:12.381180 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:12.448592 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:12.448603 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:12.448614 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:12.512421 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:12.512440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:15.042734 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:15.076968 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:15.077038 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:15.105454 1446402 cri.go:96] found id: ""
	I1222 00:31:15.105469 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.105477 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:15.105484 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:15.105548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:15.133491 1446402 cri.go:96] found id: ""
	I1222 00:31:15.133517 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.133525 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:15.133531 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:15.133610 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:15.161141 1446402 cri.go:96] found id: ""
	I1222 00:31:15.161155 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.161162 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:15.161168 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:15.161243 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:15.189035 1446402 cri.go:96] found id: ""
	I1222 00:31:15.189062 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.189071 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:15.189077 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:15.189153 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:15.215453 1446402 cri.go:96] found id: ""
	I1222 00:31:15.215467 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.215474 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:15.215479 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:15.215542 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:15.241518 1446402 cri.go:96] found id: ""
	I1222 00:31:15.241542 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.241550 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:15.241556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:15.241627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:15.270847 1446402 cri.go:96] found id: ""
	I1222 00:31:15.270862 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.270878 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:15.270886 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:15.270896 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:15.329892 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:15.329919 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:15.345769 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:15.345787 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:15.412686 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:15.412697 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:15.412708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:15.475513 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:15.475533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:18.013729 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:18.025498 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:18.025570 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:18.051451 1446402 cri.go:96] found id: ""
	I1222 00:31:18.051466 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.051473 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:18.051478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:18.051540 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:18.078412 1446402 cri.go:96] found id: ""
	I1222 00:31:18.078428 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.078436 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:18.078442 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:18.078511 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:18.105039 1446402 cri.go:96] found id: ""
	I1222 00:31:18.105054 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.105062 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:18.105067 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:18.105129 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:18.132285 1446402 cri.go:96] found id: ""
	I1222 00:31:18.132300 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.132308 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:18.132314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:18.132379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:18.160762 1446402 cri.go:96] found id: ""
	I1222 00:31:18.160781 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.160788 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:18.160794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:18.160855 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:18.187281 1446402 cri.go:96] found id: ""
	I1222 00:31:18.187295 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.187303 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:18.187308 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:18.187369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:18.214033 1446402 cri.go:96] found id: ""
	I1222 00:31:18.214048 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.214055 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:18.214062 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:18.214072 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:18.274937 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:18.274957 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:18.291496 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:18.291514 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:18.356830 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:18.356841 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:18.356851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:18.420006 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:18.420026 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:20.955836 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:20.966430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:20.966499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:20.992202 1446402 cri.go:96] found id: ""
	I1222 00:31:20.992216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:20.992223 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:20.992229 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:20.992292 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:21.020435 1446402 cri.go:96] found id: ""
	I1222 00:31:21.020449 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.020456 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:21.020462 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:21.020525 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:21.045920 1446402 cri.go:96] found id: ""
	I1222 00:31:21.045934 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.045940 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:21.045945 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:21.046007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:21.069898 1446402 cri.go:96] found id: ""
	I1222 00:31:21.069912 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.069920 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:21.069926 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:21.069986 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:21.096061 1446402 cri.go:96] found id: ""
	I1222 00:31:21.096075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.096082 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:21.096088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:21.096152 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:21.121380 1446402 cri.go:96] found id: ""
	I1222 00:31:21.121394 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.121401 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:21.121407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:21.121473 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:21.147060 1446402 cri.go:96] found id: ""
	I1222 00:31:21.147083 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.147091 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:21.147098 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:21.147110 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:21.163066 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:21.163085 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:21.229457 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:21.229467 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:21.229482 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:21.296323 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:21.296342 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:21.329392 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:21.329409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:23.886587 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:23.896889 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:23.896949 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:23.921855 1446402 cri.go:96] found id: ""
	I1222 00:31:23.921870 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.921878 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:23.921883 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:23.921943 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:23.947445 1446402 cri.go:96] found id: ""
	I1222 00:31:23.947459 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.947466 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:23.947471 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:23.947532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:23.973150 1446402 cri.go:96] found id: ""
	I1222 00:31:23.973164 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.973171 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:23.973176 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:23.973236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:24.000119 1446402 cri.go:96] found id: ""
	I1222 00:31:24.000133 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.000140 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:24.000145 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:24.000208 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:24.028319 1446402 cri.go:96] found id: ""
	I1222 00:31:24.028333 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.028341 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:24.028346 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:24.028416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:24.054514 1446402 cri.go:96] found id: ""
	I1222 00:31:24.054528 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.054536 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:24.054541 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:24.054623 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:24.079783 1446402 cri.go:96] found id: ""
	I1222 00:31:24.079796 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.079804 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:24.079812 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:24.079823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:24.136543 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:24.136563 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:24.152385 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:24.152402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:24.219394 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:24.219403 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:24.219413 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:24.282313 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:24.282331 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:26.811961 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:26.822374 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:26.822443 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:26.851730 1446402 cri.go:96] found id: ""
	I1222 00:31:26.851745 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.851753 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:26.851758 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:26.851820 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:26.876518 1446402 cri.go:96] found id: ""
	I1222 00:31:26.876533 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.876540 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:26.876545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:26.876614 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:26.906243 1446402 cri.go:96] found id: ""
	I1222 00:31:26.906258 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.906265 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:26.906271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:26.906332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:26.933029 1446402 cri.go:96] found id: ""
	I1222 00:31:26.933043 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.933050 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:26.933056 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:26.933124 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:26.962389 1446402 cri.go:96] found id: ""
	I1222 00:31:26.962404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.962411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:26.962417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:26.962478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:26.986566 1446402 cri.go:96] found id: ""
	I1222 00:31:26.986579 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.986587 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:26.986593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:26.986654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:27.013857 1446402 cri.go:96] found id: ""
	I1222 00:31:27.013872 1446402 logs.go:282] 0 containers: []
	W1222 00:31:27.013885 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:27.013896 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:27.013907 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:27.072155 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:27.072174 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:27.088000 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:27.088018 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:27.155219 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:27.155229 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:27.155240 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:27.220122 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:27.220142 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:29.756602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:29.767503 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:29.767576 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:29.796758 1446402 cri.go:96] found id: ""
	I1222 00:31:29.796773 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.796781 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:29.796786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:29.796848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:29.826111 1446402 cri.go:96] found id: ""
	I1222 00:31:29.826125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.826133 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:29.826138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:29.826199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:29.851803 1446402 cri.go:96] found id: ""
	I1222 00:31:29.851817 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.851827 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:29.851833 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:29.851893 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:29.877952 1446402 cri.go:96] found id: ""
	I1222 00:31:29.877966 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.877973 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:29.877979 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:29.878041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:29.902393 1446402 cri.go:96] found id: ""
	I1222 00:31:29.902406 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.902414 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:29.902419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:29.902499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:29.930875 1446402 cri.go:96] found id: ""
	I1222 00:31:29.930889 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.930896 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:29.930901 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:29.930961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:29.954467 1446402 cri.go:96] found id: ""
	I1222 00:31:29.954481 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.954488 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:29.954496 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:29.954506 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:30.022300 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:30.022322 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:30.101450 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:30.101468 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:30.160615 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:30.160637 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:30.177543 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:30.177570 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:30.250821 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:32.751739 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:32.762856 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:32.762918 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:32.788176 1446402 cri.go:96] found id: ""
	I1222 00:31:32.788191 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.788197 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:32.788203 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:32.788264 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:32.815561 1446402 cri.go:96] found id: ""
	I1222 00:31:32.815575 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.815582 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:32.815587 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:32.815648 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:32.840208 1446402 cri.go:96] found id: ""
	I1222 00:31:32.840222 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.840229 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:32.840235 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:32.840298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:32.865041 1446402 cri.go:96] found id: ""
	I1222 00:31:32.865055 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.865062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:32.865068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:32.865127 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:32.891852 1446402 cri.go:96] found id: ""
	I1222 00:31:32.891871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.891879 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:32.891884 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:32.891956 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:32.916991 1446402 cri.go:96] found id: ""
	I1222 00:31:32.917005 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.917013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:32.917018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:32.917078 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:32.944551 1446402 cri.go:96] found id: ""
	I1222 00:31:32.944564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.944571 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:32.944579 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:32.944589 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:33.001246 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:33.001270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:33.021275 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:33.021294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:33.093331 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:33.093342 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:33.093353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:33.155921 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:33.155942 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:35.686392 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:35.696748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:35.696809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:35.721721 1446402 cri.go:96] found id: ""
	I1222 00:31:35.721736 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.721743 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:35.721748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:35.721836 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:35.769211 1446402 cri.go:96] found id: ""
	I1222 00:31:35.769225 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.769232 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:35.769237 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:35.769296 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:35.801836 1446402 cri.go:96] found id: ""
	I1222 00:31:35.801850 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.801857 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:35.801863 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:35.801925 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:35.829689 1446402 cri.go:96] found id: ""
	I1222 00:31:35.829703 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.829711 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:35.829716 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:35.829775 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:35.855388 1446402 cri.go:96] found id: ""
	I1222 00:31:35.855403 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.855411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:35.855417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:35.855478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:35.886055 1446402 cri.go:96] found id: ""
	I1222 00:31:35.886070 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.886105 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:35.886112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:35.886177 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:35.911567 1446402 cri.go:96] found id: ""
	I1222 00:31:35.911581 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.911589 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:35.911596 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:35.911608 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:35.978738 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:35.978748 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:35.978761 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:36.043835 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:36.043857 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:36.072278 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:36.072294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:36.133943 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:36.133963 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.650565 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:38.660954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:38.661027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:38.685765 1446402 cri.go:96] found id: ""
	I1222 00:31:38.685780 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.685787 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:38.685793 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:38.685859 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:38.711272 1446402 cri.go:96] found id: ""
	I1222 00:31:38.711287 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.711295 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:38.711300 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:38.711366 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:38.739201 1446402 cri.go:96] found id: ""
	I1222 00:31:38.739217 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.739224 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:38.739230 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:38.739299 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:38.769400 1446402 cri.go:96] found id: ""
	I1222 00:31:38.769414 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.769421 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:38.769426 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:38.769486 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:38.805681 1446402 cri.go:96] found id: ""
	I1222 00:31:38.805695 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.805704 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:38.805709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:38.805770 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:38.831145 1446402 cri.go:96] found id: ""
	I1222 00:31:38.831160 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.831167 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:38.831172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:38.831233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:38.861111 1446402 cri.go:96] found id: ""
	I1222 00:31:38.861125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.861132 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:38.861140 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:38.861150 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:38.917581 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:38.917601 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.934979 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:38.934997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:39.009642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:39.009654 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:39.009666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:39.079837 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:39.079866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:41.610509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:41.620849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:41.620915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:41.645625 1446402 cri.go:96] found id: ""
	I1222 00:31:41.645639 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.645647 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:41.645652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:41.645715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:41.671325 1446402 cri.go:96] found id: ""
	I1222 00:31:41.671339 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.671347 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:41.671353 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:41.671413 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:41.695685 1446402 cri.go:96] found id: ""
	I1222 00:31:41.695699 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.695706 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:41.695712 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:41.695772 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:41.721021 1446402 cri.go:96] found id: ""
	I1222 00:31:41.721034 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.721042 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:41.721047 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:41.721108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:41.757975 1446402 cri.go:96] found id: ""
	I1222 00:31:41.757990 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.757997 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:41.758002 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:41.758064 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:41.802251 1446402 cri.go:96] found id: ""
	I1222 00:31:41.802266 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.802273 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:41.802279 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:41.802339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:41.835417 1446402 cri.go:96] found id: ""
	I1222 00:31:41.835433 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.835439 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:41.835447 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:41.835458 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:41.895808 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:41.895827 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:41.911760 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:41.911776 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:41.978878 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:41.978889 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:41.978900 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:42.043394 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:42.043415 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:44.576818 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:44.587175 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:44.587239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:44.613386 1446402 cri.go:96] found id: ""
	I1222 00:31:44.613404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.613411 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:44.613416 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:44.613479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:44.642424 1446402 cri.go:96] found id: ""
	I1222 00:31:44.642444 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.642451 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:44.642456 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:44.642517 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:44.671623 1446402 cri.go:96] found id: ""
	I1222 00:31:44.671637 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.671645 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:44.671650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:44.671720 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:44.697114 1446402 cri.go:96] found id: ""
	I1222 00:31:44.697128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.697135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:44.697140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:44.697199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:44.724199 1446402 cri.go:96] found id: ""
	I1222 00:31:44.724213 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.724220 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:44.724226 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:44.724298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:44.765403 1446402 cri.go:96] found id: ""
	I1222 00:31:44.765417 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.765436 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:44.765443 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:44.765510 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:44.795984 1446402 cri.go:96] found id: ""
	I1222 00:31:44.795999 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.796017 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:44.796026 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:44.796037 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:44.855400 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:44.855420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:44.872483 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:44.872501 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:44.941437 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:44.941449 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:44.941460 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:45.004528 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:45.004550 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.556363 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:47.566634 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:47.566695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:47.593291 1446402 cri.go:96] found id: ""
	I1222 00:31:47.593305 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.593312 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:47.593318 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:47.593387 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:47.617921 1446402 cri.go:96] found id: ""
	I1222 00:31:47.617935 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.617942 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:47.617947 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:47.618007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:47.644745 1446402 cri.go:96] found id: ""
	I1222 00:31:47.644759 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.644766 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:47.644772 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:47.644831 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:47.669635 1446402 cri.go:96] found id: ""
	I1222 00:31:47.669649 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.669656 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:47.669661 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:47.669721 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:47.696237 1446402 cri.go:96] found id: ""
	I1222 00:31:47.696251 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.696258 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:47.696263 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:47.696321 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:47.720858 1446402 cri.go:96] found id: ""
	I1222 00:31:47.720877 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.720884 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:47.720890 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:47.720950 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:47.759042 1446402 cri.go:96] found id: ""
	I1222 00:31:47.759056 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.759064 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:47.759071 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:47.759088 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:47.775637 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:47.775652 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:47.848304 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:47.848314 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:47.848326 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:47.910821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:47.910839 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.939115 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:47.939131 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.495637 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:50.506061 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:50.506147 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:50.531619 1446402 cri.go:96] found id: ""
	I1222 00:31:50.531634 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.531641 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:50.531647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:50.531707 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:50.556202 1446402 cri.go:96] found id: ""
	I1222 00:31:50.556215 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.556222 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:50.556228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:50.556289 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:50.580637 1446402 cri.go:96] found id: ""
	I1222 00:31:50.580651 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.580658 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:50.580663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:50.580726 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:50.605112 1446402 cri.go:96] found id: ""
	I1222 00:31:50.605126 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.605133 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:50.605138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:50.605198 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:50.629268 1446402 cri.go:96] found id: ""
	I1222 00:31:50.629283 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.629290 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:50.629295 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:50.629356 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:50.655550 1446402 cri.go:96] found id: ""
	I1222 00:31:50.655564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.655571 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:50.655576 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:50.655635 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:50.683838 1446402 cri.go:96] found id: ""
	I1222 00:31:50.683852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.683859 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:50.683866 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:50.683877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.739538 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:50.739556 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:50.759933 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:50.759948 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:50.837166 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:50.837177 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:50.837188 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:50.902694 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:50.902713 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:53.430394 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:53.441567 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:53.441627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:53.468013 1446402 cri.go:96] found id: ""
	I1222 00:31:53.468027 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.468034 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:53.468039 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:53.468109 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:53.494162 1446402 cri.go:96] found id: ""
	I1222 00:31:53.494176 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.494183 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:53.494188 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:53.494248 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:53.524039 1446402 cri.go:96] found id: ""
	I1222 00:31:53.524061 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.524068 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:53.524074 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:53.524137 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:53.548965 1446402 cri.go:96] found id: ""
	I1222 00:31:53.548979 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.548987 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:53.548992 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:53.549054 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:53.580216 1446402 cri.go:96] found id: ""
	I1222 00:31:53.580231 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.580238 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:53.580244 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:53.580304 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:53.605286 1446402 cri.go:96] found id: ""
	I1222 00:31:53.605301 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.605308 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:53.605314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:53.605391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:53.630900 1446402 cri.go:96] found id: ""
	I1222 00:31:53.630915 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.630922 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:53.630930 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:53.630940 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:53.686921 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:53.686939 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:53.704267 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:53.704290 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:53.789032 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:53.778364   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.780153   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.781189   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.783313   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.784665   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:53.778364   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.780153   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.781189   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.783313   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.784665   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:53.789043 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:53.789054 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:53.855439 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:53.855459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:56.386602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:56.396636 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:56.396695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:56.419622 1446402 cri.go:96] found id: ""
	I1222 00:31:56.419635 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.419642 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:56.419647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:56.419711 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:56.443068 1446402 cri.go:96] found id: ""
	I1222 00:31:56.443082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.443088 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:56.443094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:56.443151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:56.468547 1446402 cri.go:96] found id: ""
	I1222 00:31:56.468561 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.468568 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:56.468573 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:56.468639 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:56.496420 1446402 cri.go:96] found id: ""
	I1222 00:31:56.496434 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.496448 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:56.496453 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:56.496515 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:56.521822 1446402 cri.go:96] found id: ""
	I1222 00:31:56.521837 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.521844 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:56.521849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:56.521910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:56.548113 1446402 cri.go:96] found id: ""
	I1222 00:31:56.548127 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.548135 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:56.548142 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:56.548205 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:56.577150 1446402 cri.go:96] found id: ""
	I1222 00:31:56.577166 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.577173 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:56.577181 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:56.577191 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:56.635797 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:56.635817 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:56.651214 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:56.651230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:56.716938 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.708823   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710450   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710968   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.712514   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.708823   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710450   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710968   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.712514   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:56.716948 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:56.716959 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:56.780730 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:56.780749 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.308156 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:59.318415 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:59.318476 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:59.343305 1446402 cri.go:96] found id: ""
	I1222 00:31:59.343319 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.343326 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:59.343332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:59.343390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:59.368501 1446402 cri.go:96] found id: ""
	I1222 00:31:59.368515 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.368523 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:59.368529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:59.368595 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:59.394364 1446402 cri.go:96] found id: ""
	I1222 00:31:59.394378 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.394385 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:59.394391 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:59.394452 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:59.420068 1446402 cri.go:96] found id: ""
	I1222 00:31:59.420082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.420089 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:59.420094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:59.420160 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:59.444153 1446402 cri.go:96] found id: ""
	I1222 00:31:59.444167 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.444174 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:59.444179 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:59.444239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:59.473812 1446402 cri.go:96] found id: ""
	I1222 00:31:59.473827 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.473834 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:59.473840 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:59.473901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:59.502392 1446402 cri.go:96] found id: ""
	I1222 00:31:59.502405 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.502412 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:59.502420 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:59.502429 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:59.564094 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:59.564114 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.596168 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:59.596186 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:59.652216 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:59.652236 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:59.668263 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:59.668278 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:59.729801 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:59.722174   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.722594   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724162   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724501   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.726011   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:59.722174   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.722594   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724162   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724501   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.726011   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:02.230111 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:02.241018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:02.241081 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:02.266489 1446402 cri.go:96] found id: ""
	I1222 00:32:02.266506 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.266514 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:02.266522 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:02.266583 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:02.291427 1446402 cri.go:96] found id: ""
	I1222 00:32:02.291451 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.291459 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:02.291465 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:02.291532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:02.317575 1446402 cri.go:96] found id: ""
	I1222 00:32:02.317599 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.317607 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:02.317612 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:02.317683 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:02.346894 1446402 cri.go:96] found id: ""
	I1222 00:32:02.346918 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.346926 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:02.346932 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:02.347004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:02.373650 1446402 cri.go:96] found id: ""
	I1222 00:32:02.373676 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.373683 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:02.373689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:02.373758 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:02.398320 1446402 cri.go:96] found id: ""
	I1222 00:32:02.398334 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.398341 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:02.398347 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:02.398416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:02.430114 1446402 cri.go:96] found id: ""
	I1222 00:32:02.430128 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.430136 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:02.430144 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:02.430154 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:02.485528 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:02.485549 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:02.501732 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:02.501748 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:02.566784 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:02.558532   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.559242   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.560819   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.561301   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.562756   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:02.558532   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.559242   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.560819   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.561301   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.562756   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:02.566793 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:02.566804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:02.631159 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:02.631178 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:05.163426 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:05.173887 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:05.173961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:05.199160 1446402 cri.go:96] found id: ""
	I1222 00:32:05.199174 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.199181 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:05.199187 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:05.199257 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:05.223620 1446402 cri.go:96] found id: ""
	I1222 00:32:05.223634 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.223641 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:05.223647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:05.223706 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:05.248870 1446402 cri.go:96] found id: ""
	I1222 00:32:05.248885 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.248893 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:05.248898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:05.248961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:05.274824 1446402 cri.go:96] found id: ""
	I1222 00:32:05.274839 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.274846 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:05.274851 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:05.274910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:05.300225 1446402 cri.go:96] found id: ""
	I1222 00:32:05.300239 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.300251 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:05.300257 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:05.300317 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:05.324470 1446402 cri.go:96] found id: ""
	I1222 00:32:05.324484 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.324492 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:05.324500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:05.324563 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:05.352629 1446402 cri.go:96] found id: ""
	I1222 00:32:05.352647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.352655 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:05.352666 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:05.352677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:05.415991 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:05.416014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:05.431828 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:05.431845 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:05.498339 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:05.489677   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.490403   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492216   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492835   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.494413   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:05.489677   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.490403   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492216   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492835   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.494413   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:05.498349 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:05.498364 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:05.563506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:05.563525 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.094246 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:08.105089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:08.105172 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:08.132175 1446402 cri.go:96] found id: ""
	I1222 00:32:08.132203 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.132211 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:08.132217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:08.132280 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:08.158101 1446402 cri.go:96] found id: ""
	I1222 00:32:08.158115 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.158122 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:08.158128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:08.158204 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:08.187238 1446402 cri.go:96] found id: ""
	I1222 00:32:08.187252 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.187259 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:08.187265 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:08.187325 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:08.211742 1446402 cri.go:96] found id: ""
	I1222 00:32:08.211756 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.211763 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:08.211768 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:08.211830 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:08.236099 1446402 cri.go:96] found id: ""
	I1222 00:32:08.236113 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.236120 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:08.236126 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:08.236199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:08.261393 1446402 cri.go:96] found id: ""
	I1222 00:32:08.261407 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.261424 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:08.261430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:08.261498 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:08.288417 1446402 cri.go:96] found id: ""
	I1222 00:32:08.288439 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.288447 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:08.288456 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:08.288467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:08.304103 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:08.304124 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:08.368642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:08.368652 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:08.368663 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:08.430523 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:08.430543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.458205 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:08.458222 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.020855 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:11.033129 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:11.033201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:11.063371 1446402 cri.go:96] found id: ""
	I1222 00:32:11.063385 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.063392 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:11.063398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:11.063479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:11.089853 1446402 cri.go:96] found id: ""
	I1222 00:32:11.089880 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.089891 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:11.089898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:11.089971 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:11.120928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.120943 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.120971 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:11.120978 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:11.121045 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:11.151464 1446402 cri.go:96] found id: ""
	I1222 00:32:11.151502 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.151510 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:11.151516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:11.151589 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:11.179209 1446402 cri.go:96] found id: ""
	I1222 00:32:11.179224 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.179233 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:11.179238 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:11.179324 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:11.205945 1446402 cri.go:96] found id: ""
	I1222 00:32:11.205979 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.205987 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:11.205993 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:11.206065 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:11.231928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.231942 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.231949 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:11.231957 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:11.231967 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.296038 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:11.296064 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:11.312748 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:11.312764 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:11.378465 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:11.378480 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:11.378499 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:11.444244 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:11.444264 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:13.977331 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:13.989011 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:13.989094 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:14.028691 1446402 cri.go:96] found id: ""
	I1222 00:32:14.028726 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.028734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:14.028739 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:14.028810 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:14.055710 1446402 cri.go:96] found id: ""
	I1222 00:32:14.055725 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.055732 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:14.055738 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:14.055809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:14.082530 1446402 cri.go:96] found id: ""
	I1222 00:32:14.082546 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.082553 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:14.082559 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:14.082625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:14.107817 1446402 cri.go:96] found id: ""
	I1222 00:32:14.107840 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.107847 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:14.107853 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:14.107913 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:14.136680 1446402 cri.go:96] found id: ""
	I1222 00:32:14.136695 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.136701 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:14.136707 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:14.136767 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:14.161938 1446402 cri.go:96] found id: ""
	I1222 00:32:14.161961 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.161968 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:14.161974 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:14.162041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:14.186794 1446402 cri.go:96] found id: ""
	I1222 00:32:14.186808 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.186814 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:14.186823 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:14.186832 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:14.242688 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:14.242708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:14.259715 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:14.259732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:14.326979 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:14.326990 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:14.327002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:14.395678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:14.395705 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:16.929785 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:16.940545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:16.940609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:16.965350 1446402 cri.go:96] found id: ""
	I1222 00:32:16.965365 1446402 logs.go:282] 0 containers: []
	W1222 00:32:16.965372 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:16.965378 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:16.965441 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:17.001431 1446402 cri.go:96] found id: ""
	I1222 00:32:17.001447 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.001455 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:17.001461 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:17.001530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:17.045444 1446402 cri.go:96] found id: ""
	I1222 00:32:17.045459 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.045466 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:17.045472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:17.045531 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:17.080407 1446402 cri.go:96] found id: ""
	I1222 00:32:17.080422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.080429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:17.080435 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:17.080500 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:17.107785 1446402 cri.go:96] found id: ""
	I1222 00:32:17.107799 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.107806 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:17.107812 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:17.107874 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:17.133084 1446402 cri.go:96] found id: ""
	I1222 00:32:17.133099 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.133106 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:17.133112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:17.133170 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:17.162200 1446402 cri.go:96] found id: ""
	I1222 00:32:17.162215 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.162222 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:17.162232 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:17.162243 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:17.220080 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:17.220098 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:17.235955 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:17.235971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:17.302399 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:17.302410 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:17.302420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:17.365559 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:17.365578 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:19.896945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:19.907830 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:19.907900 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:19.933463 1446402 cri.go:96] found id: ""
	I1222 00:32:19.933478 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.933485 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:19.933490 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:19.933556 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:19.958969 1446402 cri.go:96] found id: ""
	I1222 00:32:19.958983 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.958990 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:19.958996 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:19.959057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:19.984725 1446402 cri.go:96] found id: ""
	I1222 00:32:19.984740 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.984748 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:19.984753 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:19.984819 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:20.030303 1446402 cri.go:96] found id: ""
	I1222 00:32:20.030318 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.030326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:20.030332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:20.030400 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:20.067239 1446402 cri.go:96] found id: ""
	I1222 00:32:20.067254 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.067262 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:20.067268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:20.067336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:20.094147 1446402 cri.go:96] found id: ""
	I1222 00:32:20.094161 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.094169 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:20.094174 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:20.094236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:20.120347 1446402 cri.go:96] found id: ""
	I1222 00:32:20.120361 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.120369 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:20.120377 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:20.120387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:20.192596 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:20.192608 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:20.192620 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:20.255011 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:20.255031 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:20.288327 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:20.288344 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:20.347178 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:20.347196 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:22.863692 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:22.873845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:22.873915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:22.898717 1446402 cri.go:96] found id: ""
	I1222 00:32:22.898737 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.898744 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:22.898749 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:22.898808 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:22.923719 1446402 cri.go:96] found id: ""
	I1222 00:32:22.923734 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.923741 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:22.923746 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:22.923806 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:22.953819 1446402 cri.go:96] found id: ""
	I1222 00:32:22.953834 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.953841 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:22.953847 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:22.953908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:22.977769 1446402 cri.go:96] found id: ""
	I1222 00:32:22.977783 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.977791 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:22.977796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:22.977858 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:23.011333 1446402 cri.go:96] found id: ""
	I1222 00:32:23.011348 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.011355 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:23.011361 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:23.011426 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:23.040887 1446402 cri.go:96] found id: ""
	I1222 00:32:23.040900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.040907 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:23.040913 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:23.040973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:23.070583 1446402 cri.go:96] found id: ""
	I1222 00:32:23.070597 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.070604 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:23.070612 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:23.070622 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:23.087115 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:23.087132 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:23.152903 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:23.152913 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:23.152924 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:23.215824 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:23.215846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:23.249147 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:23.249175 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:25.810217 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:25.820952 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:25.821015 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:25.847989 1446402 cri.go:96] found id: ""
	I1222 00:32:25.848004 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.848011 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:25.848016 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:25.848091 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:25.877243 1446402 cri.go:96] found id: ""
	I1222 00:32:25.877258 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.877265 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:25.877271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:25.877332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:25.902255 1446402 cri.go:96] found id: ""
	I1222 00:32:25.902271 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.902278 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:25.902283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:25.902344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:25.927468 1446402 cri.go:96] found id: ""
	I1222 00:32:25.927482 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.927489 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:25.927495 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:25.927559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:25.957558 1446402 cri.go:96] found id: ""
	I1222 00:32:25.957571 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.957578 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:25.957583 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:25.957644 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:25.982483 1446402 cri.go:96] found id: ""
	I1222 00:32:25.982509 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.982517 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:25.982523 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:25.982599 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:26.024676 1446402 cri.go:96] found id: ""
	I1222 00:32:26.024691 1446402 logs.go:282] 0 containers: []
	W1222 00:32:26.024698 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:26.024706 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:26.024724 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:26.087946 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:26.087968 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:26.105041 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:26.105066 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:26.171303 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:26.162481   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.163042   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.164767   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.165348   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.166856   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:26.162481   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.163042   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.164767   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.165348   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.166856   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:26.171313 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:26.171324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:26.239046 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:26.239065 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.769012 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:28.779505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:28.779566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:28.804277 1446402 cri.go:96] found id: ""
	I1222 00:32:28.804291 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.804298 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:28.804303 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:28.804364 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:28.831914 1446402 cri.go:96] found id: ""
	I1222 00:32:28.831927 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.831935 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:28.831940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:28.831999 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:28.858930 1446402 cri.go:96] found id: ""
	I1222 00:32:28.858951 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.858959 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:28.858964 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:28.859026 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:28.884503 1446402 cri.go:96] found id: ""
	I1222 00:32:28.884517 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.884524 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:28.884529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:28.884588 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:28.908385 1446402 cri.go:96] found id: ""
	I1222 00:32:28.908399 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.908406 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:28.908412 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:28.908471 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:28.932216 1446402 cri.go:96] found id: ""
	I1222 00:32:28.932231 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.932238 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:28.932243 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:28.932318 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:28.960692 1446402 cri.go:96] found id: ""
	I1222 00:32:28.960706 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.960714 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:28.960721 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:28.960732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.991268 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:28.991284 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:29.051794 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:29.051812 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:29.076793 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:29.076809 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:29.140856 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:29.132991   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.133597   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135220   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135674   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.137107   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:29.132991   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.133597   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135220   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135674   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.137107   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:29.140866 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:29.140877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:31.704016 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:31.714529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:31.714593 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:31.739665 1446402 cri.go:96] found id: ""
	I1222 00:32:31.739679 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.739687 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:31.739693 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:31.739753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:31.764377 1446402 cri.go:96] found id: ""
	I1222 00:32:31.764391 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.764399 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:31.764404 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:31.764465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:31.793617 1446402 cri.go:96] found id: ""
	I1222 00:32:31.793631 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.793638 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:31.793644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:31.793709 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:31.818025 1446402 cri.go:96] found id: ""
	I1222 00:32:31.818040 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.818047 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:31.818055 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:31.818145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:31.848262 1446402 cri.go:96] found id: ""
	I1222 00:32:31.848277 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.848285 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:31.848293 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:31.848357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:31.873649 1446402 cri.go:96] found id: ""
	I1222 00:32:31.873663 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.873670 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:31.873676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:31.873739 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:31.898375 1446402 cri.go:96] found id: ""
	I1222 00:32:31.898390 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.898397 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:31.898404 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:31.898416 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:31.955541 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:31.955560 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:31.971557 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:31.971574 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:32.067449 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:32.057015   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.057465   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.059124   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.060199   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.063114   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:32.057015   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.057465   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.059124   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.060199   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.063114   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:32.067459 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:32.067469 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:32.129846 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:32.129865 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:34.659453 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:34.669625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:34.669685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:34.696885 1446402 cri.go:96] found id: ""
	I1222 00:32:34.696900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.696907 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:34.696912 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:34.696972 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:34.721026 1446402 cri.go:96] found id: ""
	I1222 00:32:34.721050 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.721058 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:34.721063 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:34.721133 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:34.745654 1446402 cri.go:96] found id: ""
	I1222 00:32:34.745669 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.745687 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:34.745692 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:34.745753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:34.771407 1446402 cri.go:96] found id: ""
	I1222 00:32:34.771422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.771429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:34.771434 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:34.771502 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:34.795734 1446402 cri.go:96] found id: ""
	I1222 00:32:34.795749 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.795756 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:34.795761 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:34.795821 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:34.824632 1446402 cri.go:96] found id: ""
	I1222 00:32:34.824647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.824664 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:34.824670 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:34.824737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:34.850691 1446402 cri.go:96] found id: ""
	I1222 00:32:34.850705 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.850713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:34.850721 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:34.850732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:34.923721 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:34.914435   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.915282   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.916952   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.917512   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.919258   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:34.914435   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.915282   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.916952   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.917512   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.919258   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:34.923732 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:34.923743 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:34.988429 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:34.988447 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:35.032884 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:35.032901 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:35.094822 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:35.094842 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:37.611964 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:37.625103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:37.625168 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:37.652713 1446402 cri.go:96] found id: ""
	I1222 00:32:37.652727 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.652734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:37.652740 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:37.652805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:37.677907 1446402 cri.go:96] found id: ""
	I1222 00:32:37.677921 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.677928 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:37.677934 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:37.677996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:37.706882 1446402 cri.go:96] found id: ""
	I1222 00:32:37.706901 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.706909 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:37.706914 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:37.706973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:37.734381 1446402 cri.go:96] found id: ""
	I1222 00:32:37.734396 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.734403 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:37.734408 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:37.734468 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:37.763444 1446402 cri.go:96] found id: ""
	I1222 00:32:37.763464 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.763483 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:37.763489 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:37.763559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:37.789695 1446402 cri.go:96] found id: ""
	I1222 00:32:37.789718 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.789726 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:37.789732 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:37.789805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:37.818949 1446402 cri.go:96] found id: ""
	I1222 00:32:37.818963 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.818970 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:37.818977 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:37.818989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:37.886829 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:37.877811   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.878764   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880313   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880782   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.882319   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:37.877811   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.878764   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880313   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880782   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.882319   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:37.886840 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:37.886850 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:37.953234 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:37.953253 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:37.982264 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:37.982280 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:38.049773 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:38.049792 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.567633 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:40.577940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:40.578000 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:40.602023 1446402 cri.go:96] found id: ""
	I1222 00:32:40.602038 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.602045 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:40.602051 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:40.602145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:40.630778 1446402 cri.go:96] found id: ""
	I1222 00:32:40.630802 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.630810 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:40.630816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:40.630877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:40.658578 1446402 cri.go:96] found id: ""
	I1222 00:32:40.658592 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.658599 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:40.658605 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:40.658669 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:40.686369 1446402 cri.go:96] found id: ""
	I1222 00:32:40.686384 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.686393 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:40.686399 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:40.686466 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:40.712486 1446402 cri.go:96] found id: ""
	I1222 00:32:40.712501 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.712509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:40.712514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:40.712580 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:40.744516 1446402 cri.go:96] found id: ""
	I1222 00:32:40.744531 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.744538 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:40.744544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:40.744609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:40.770724 1446402 cri.go:96] found id: ""
	I1222 00:32:40.770738 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.770745 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:40.770754 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:40.770766 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.787581 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:40.787598 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:40.853257 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:40.853267 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:40.853279 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:40.918705 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:40.918728 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:40.947006 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:40.947022 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:43.505746 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:43.515847 1446402 kubeadm.go:602] duration metric: took 4m1.800425441s to restartPrimaryControlPlane
	W1222 00:32:43.515910 1446402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:32:43.515983 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:32:43.923830 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:32:43.937721 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:32:43.945799 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:32:43.945856 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:32:43.953730 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:32:43.953738 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:32:43.953790 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:32:43.962117 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:32:43.962172 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:32:43.969797 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:32:43.977738 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:32:43.977798 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:32:43.986214 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:32:43.994326 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:32:43.994386 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:32:44.004154 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:32:44.013730 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:32:44.013800 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:32:44.022121 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:32:44.061736 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:32:44.061785 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:32:44.140713 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:32:44.140778 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:32:44.140818 1446402 kubeadm.go:319] OS: Linux
	I1222 00:32:44.140862 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:32:44.140909 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:32:44.140955 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:32:44.141002 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:32:44.141048 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:32:44.141095 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:32:44.141140 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:32:44.141187 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:32:44.141232 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:32:44.208774 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:32:44.208878 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:32:44.208966 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:32:44.214899 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:32:44.218610 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:32:44.218748 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:32:44.218821 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:32:44.218895 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:32:44.218955 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:32:44.219024 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:32:44.219076 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:32:44.219138 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:32:44.219198 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:32:44.219270 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:32:44.219343 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:32:44.219380 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:32:44.219458 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:32:44.443111 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:32:44.602435 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:32:44.699769 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:32:44.991502 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:32:45.160573 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:32:45.170594 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:32:45.170674 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:32:45.173883 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:32:45.174024 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:32:45.174124 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:32:45.175745 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:32:45.208642 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:32:45.208749 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:32:45.228521 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:32:45.228620 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:32:45.228659 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:32:45.414555 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:32:45.414668 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:36:45.414312 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00033138s
	I1222 00:36:45.414339 1446402 kubeadm.go:319] 
	I1222 00:36:45.414437 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:36:45.414497 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:36:45.414614 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:36:45.414622 1446402 kubeadm.go:319] 
	I1222 00:36:45.414721 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:36:45.414751 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:36:45.414780 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:36:45.414783 1446402 kubeadm.go:319] 
	I1222 00:36:45.419351 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:36:45.419863 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:36:45.420008 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:36:45.420300 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:36:45.420306 1446402 kubeadm.go:319] 
	I1222 00:36:45.420408 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:36:45.420558 1446402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00033138s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 00:36:45.420656 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:36:45.827625 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:36:45.841758 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:36:45.841815 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:36:45.850297 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:36:45.850306 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:36:45.850362 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:36:45.858548 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:36:45.858613 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:36:45.866403 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:36:45.875159 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:36:45.875216 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:36:45.883092 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.891274 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:36:45.891330 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.899439 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:36:45.907618 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:36:45.907680 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:36:45.915873 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:36:45.954554 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:36:45.954640 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:36:46.034225 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:36:46.034294 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:36:46.034329 1446402 kubeadm.go:319] OS: Linux
	I1222 00:36:46.034372 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:36:46.034419 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:36:46.034466 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:36:46.034512 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:36:46.034571 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:36:46.034626 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:36:46.034679 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:36:46.034746 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:36:46.034795 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:36:46.102483 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:36:46.102587 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:36:46.102678 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:36:46.110548 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:36:46.114145 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:36:46.114232 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:36:46.114297 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:36:46.114378 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:36:46.114438 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:36:46.114552 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:36:46.114617 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:36:46.114681 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:36:46.114756 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:36:46.114832 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:36:46.114915 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:36:46.114959 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:36:46.115024 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:36:46.590004 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:36:46.981109 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:36:47.331562 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:36:47.513275 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:36:48.017649 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:36:48.018361 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:36:48.020999 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:36:48.024119 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:36:48.024221 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:36:48.024298 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:36:48.024363 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:36:48.046779 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:36:48.047056 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:36:48.054716 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:36:48.055076 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:36:48.055127 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:36:48.190129 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:36:48.190242 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:40:48.190377 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238043s
	I1222 00:40:48.190402 1446402 kubeadm.go:319] 
	I1222 00:40:48.190458 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:40:48.190495 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:40:48.190599 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:40:48.190604 1446402 kubeadm.go:319] 
	I1222 00:40:48.190706 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:40:48.190737 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:40:48.190766 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:40:48.190769 1446402 kubeadm.go:319] 
	I1222 00:40:48.196227 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:40:48.196675 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:40:48.196785 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:40:48.197020 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:40:48.197025 1446402 kubeadm.go:319] 
	I1222 00:40:48.197092 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:40:48.197152 1446402 kubeadm.go:403] duration metric: took 12m6.51958097s to StartCluster
	I1222 00:40:48.197184 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:40:48.197246 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:40:48.222444 1446402 cri.go:96] found id: ""
	I1222 00:40:48.222459 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.222466 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:40:48.222472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:40:48.222536 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:40:48.256342 1446402 cri.go:96] found id: ""
	I1222 00:40:48.256356 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.256363 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:40:48.256368 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:40:48.256430 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:40:48.285108 1446402 cri.go:96] found id: ""
	I1222 00:40:48.285122 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.285129 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:40:48.285135 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:40:48.285196 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:40:48.317753 1446402 cri.go:96] found id: ""
	I1222 00:40:48.317768 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.317775 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:40:48.317780 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:40:48.317842 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:40:48.347674 1446402 cri.go:96] found id: ""
	I1222 00:40:48.347689 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.347696 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:40:48.347701 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:40:48.347765 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:40:48.372255 1446402 cri.go:96] found id: ""
	I1222 00:40:48.372268 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.372275 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:40:48.372281 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:40:48.372339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:40:48.396691 1446402 cri.go:96] found id: ""
	I1222 00:40:48.396705 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.396713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:40:48.396725 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:40:48.396735 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:40:48.455513 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:40:48.455533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:40:48.471680 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:40:48.471697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:40:48.541459 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:40:48.541473 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:40:48.541483 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:40:48.603413 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:40:48.603432 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:40:48.631201 1446402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:40:48.631242 1446402 out.go:285] * 
	W1222 00:40:48.631304 1446402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.631321 1446402 out.go:285] * 
	W1222 00:40:48.633603 1446402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:40:48.639700 1446402 out.go:203] 
	W1222 00:40:48.642575 1446402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.642620 1446402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:40:48.642642 1446402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:40:48.645844 1446402 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248656814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248726812Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248818752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248887126Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248959487Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249024218Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249082229Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249153910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249223252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249308890Z" level=info msg="Connect containerd service"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249702304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.252215911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272726589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273135610Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272971801Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273361942Z" level=info msg="Start recovering state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.330860881Z" level=info msg="Start event monitor"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331048714Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331117121Z" level=info msg="Start streaming server"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331184855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331242062Z" level=info msg="runtime interface starting up..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331301705Z" level=info msg="starting plugins..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331364582Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331577047Z" level=info msg="containerd successfully booted in 0.110567s"
	Dec 22 00:28:40 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:40:49.865554   21020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:49.866699   21020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:49.867897   21020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:49.868727   21020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:49.870642   21020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:40:49 up 1 day,  7:23,  0 user,  load average: 0.09, 0.17, 0.50
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:40:46 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:47 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 00:40:47 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:47 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:47 functional-973657 kubelet[20822]: E1222 00:40:47.537294   20822 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:47 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:47 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:48 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 00:40:48 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:48 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:48 functional-973657 kubelet[20840]: E1222 00:40:48.307321   20840 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:48 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:48 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:48 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 00:40:48 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:48 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:48 functional-973657 kubelet[20926]: E1222 00:40:48.989211   20926 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:48 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:48 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:49 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 00:40:49 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:49 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:49 functional-973657 kubelet[21003]: E1222 00:40:49.793446   21003 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:49 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:49 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (333.509769ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (733.34s)
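Editor's note: the repeated kubeadm preflight warning in the log above names the actual blocker — on this cgroup v1 host, kubelet v1.35 refuses to start unless its configuration explicitly opts back in. A minimal sketch of that opt-in, using the `FailCgroupV1` field the warning itself names (field placement should be verified against the kubelet version in use):

```yaml
# Sketch only: opt the kubelet back into running on a cgroup v1 host,
# as described by the "[WARNING SystemVerification]" preflight message above.
# Field name taken from the warning text; verify against your kubelet version.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

Migrating the CI host to cgroup v2 would avoid the opt-in entirely, per the KEP linked in the warning.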
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-973657 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-973657 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (60.601586ms)
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-973657 get po -l tier=control-plane -n kube-system -o=json": exit status 1
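The `connection refused` from kubectl simply reflects that nothing is listening on 192.168.49.2:8441 while the apiserver is down. As a rough sketch, a raw TCP connect (using bash's `/dev/tcp` pseudo-device; the `probe` helper is hypothetical) reproduces the same check without kubectl:

```shell
# Attempt a plain TCP connect and report whether the port answers.
# bash's /dev/tcp/<host>/<port> opens a client socket; a failed open
# means the connection was refused (or timed out).
probe() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "open"
    else
        echo "refused"
    fi
}

# While the apiserver is stopped, probing its endpoint behaves like
# probing an unused local port:
probe 127.0.0.1 1
```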
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:
-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (326.51771ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-722318 image ls --format yaml --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ ssh     │ functional-722318 ssh pgrep buildkitd                                                                                                                 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ image   │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr                                                │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format json --alsologtostderr                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls --format table --alsologtostderr                                                                                           │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ image   │ functional-722318 image ls                                                                                                                            │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ delete  │ -p functional-722318                                                                                                                                  │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
	│ start   │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │                     │
	│ start   │ -p functional-973657 --alsologtostderr -v=8                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:22 UTC │                     │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add registry.k8s.io/pause:latest                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache add minikube-local-cache-test:functional-973657                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ functional-973657 cache delete minikube-local-cache-test:functional-973657                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl images                                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ cache   │ functional-973657 cache reload                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ kubectl │ functional-973657 kubectl -- --context functional-973657 get pods                                                                                     │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ start   │ -p functional-973657 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:28:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:28:37.451822 1446402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:28:37.451933 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.451942 1446402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:28:37.451946 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.452197 1446402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:28:37.453530 1446402 out.go:368] Setting JSON to false
	I1222 00:28:37.454369 1446402 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":112270,"bootTime":1766251047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:28:37.454418 1446402 start.go:143] virtualization:  
	I1222 00:28:37.457786 1446402 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:28:37.461618 1446402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:28:37.461721 1446402 notify.go:221] Checking for updates...
	I1222 00:28:37.467381 1446402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:28:37.470438 1446402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:28:37.473311 1446402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:28:37.476105 1446402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:28:37.479015 1446402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:28:37.482344 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:37.482442 1446402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:28:37.509513 1446402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:28:37.509620 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.577428 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.567598413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.577529 1446402 docker.go:319] overlay module found
	I1222 00:28:37.580701 1446402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:28:37.583433 1446402 start.go:309] selected driver: docker
	I1222 00:28:37.583443 1446402 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.583549 1446402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:28:37.583656 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.637869 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.628834862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.638333 1446402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:28:37.638357 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:37.638411 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:37.638452 1446402 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.641536 1446402 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:28:37.644340 1446402 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:28:37.647258 1446402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:28:37.650255 1446402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:28:37.650391 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:37.650410 1446402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:28:37.650417 1446402 cache.go:65] Caching tarball of preloaded images
	I1222 00:28:37.650491 1446402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:28:37.650499 1446402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:28:37.650609 1446402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:28:37.670527 1446402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:28:37.670540 1446402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:28:37.670559 1446402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:28:37.670589 1446402 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:28:37.670659 1446402 start.go:364] duration metric: took 50.988µs to acquireMachinesLock for "functional-973657"
	I1222 00:28:37.670679 1446402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:28:37.670683 1446402 fix.go:54] fixHost starting: 
	I1222 00:28:37.670937 1446402 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:28:37.688276 1446402 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:28:37.688299 1446402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:28:37.691627 1446402 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:28:37.691654 1446402 machine.go:94] provisionDockerMachine start ...
	I1222 00:28:37.691736 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.709165 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.709504 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.709511 1446402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:28:37.842221 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:37.842236 1446402 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:28:37.842299 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.861944 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.862401 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.862411 1446402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:28:38.004653 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:38.004757 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.029552 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:38.029903 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:38.029921 1446402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:28:38.166540 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:28:38.166558 1446402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:28:38.166588 1446402 ubuntu.go:190] setting up certificates
	I1222 00:28:38.166605 1446402 provision.go:84] configureAuth start
	I1222 00:28:38.166666 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:38.184810 1446402 provision.go:143] copyHostCerts
	I1222 00:28:38.184868 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:28:38.184883 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:28:38.184958 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:28:38.185063 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:28:38.185068 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:28:38.185094 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:28:38.185151 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:28:38.185154 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:28:38.185176 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:28:38.185228 1446402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:28:38.572282 1446402 provision.go:177] copyRemoteCerts
	I1222 00:28:38.572338 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:28:38.572378 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.590440 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.686182 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:28:38.704460 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:28:38.721777 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:28:38.739280 1446402 provision.go:87] duration metric: took 572.652959ms to configureAuth
	I1222 00:28:38.739299 1446402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:28:38.739484 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:38.739490 1446402 machine.go:97] duration metric: took 1.047830613s to provisionDockerMachine
	I1222 00:28:38.739496 1446402 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:28:38.739506 1446402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:28:38.739568 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:28:38.739605 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.761201 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.864350 1446402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:28:38.868359 1446402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:28:38.868379 1446402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:28:38.868390 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:28:38.868447 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:28:38.868524 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:28:38.868598 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:28:38.868641 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:28:38.878975 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:38.897171 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:28:38.915159 1446402 start.go:296] duration metric: took 175.648245ms for postStartSetup
	I1222 00:28:38.915247 1446402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:28:38.915286 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.933740 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.031561 1446402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:28:39.036720 1446402 fix.go:56] duration metric: took 1.366028879s for fixHost
	I1222 00:28:39.036736 1446402 start.go:83] releasing machines lock for "functional-973657", held for 1.366069585s
	I1222 00:28:39.036807 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:39.056063 1446402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:28:39.056131 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.056209 1446402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:28:39.056284 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.084466 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.086214 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.182487 1446402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:28:39.277379 1446402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:28:39.281860 1446402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:28:39.281935 1446402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:28:39.290006 1446402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:28:39.290021 1446402 start.go:496] detecting cgroup driver to use...
	I1222 00:28:39.290053 1446402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:28:39.290134 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:28:39.305829 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:28:39.319320 1446402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:28:39.319374 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:28:39.335346 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:28:39.349145 1446402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:28:39.473478 1446402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:28:39.618008 1446402 docker.go:234] disabling docker service ...
	I1222 00:28:39.618090 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:28:39.634656 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:28:39.647677 1446402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:28:39.771400 1446402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:28:39.894302 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:28:39.907014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:28:39.920771 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:28:39.929451 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:28:39.938829 1446402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:28:39.938905 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:28:39.947569 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.956482 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:28:39.965074 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.973881 1446402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:28:39.981977 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:28:39.990962 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:28:39.999843 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:28:40.013571 1446402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:28:40.024830 1446402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:28:40.034498 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.154100 1446402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:28:40.334682 1446402 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:28:40.334744 1446402 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:28:40.338667 1446402 start.go:564] Will wait 60s for crictl version
	I1222 00:28:40.338723 1446402 ssh_runner.go:195] Run: which crictl
	I1222 00:28:40.342335 1446402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:28:40.367245 1446402 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:28:40.367308 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.389012 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.418027 1446402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:28:40.420898 1446402 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:28:40.437638 1446402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:28:40.444854 1446402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:28:40.447771 1446402 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:28:40.447915 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:40.447997 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.473338 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.473351 1446402 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:28:40.473409 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.498366 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.498377 1446402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:28:40.498383 1446402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:28:40.498490 1446402 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:28:40.498554 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:28:40.524507 1446402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:28:40.524524 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:40.524533 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:40.524546 1446402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:28:40.524568 1446402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubelet
ConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:28:40.524688 1446402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:28:40.524764 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:28:40.533361 1446402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:28:40.533424 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:28:40.541244 1446402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:28:40.555755 1446402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:28:40.568267 1446402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1222 00:28:40.581122 1446402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:28:40.585058 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.703120 1446402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:28:40.989767 1446402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:28:40.989777 1446402 certs.go:195] generating shared ca certs ...
	I1222 00:28:40.989791 1446402 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:28:40.989935 1446402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:28:40.989982 1446402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:28:40.989987 1446402 certs.go:257] generating profile certs ...
	I1222 00:28:40.990067 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:28:40.990138 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:28:40.990175 1446402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:28:40.990291 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:28:40.990321 1446402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:28:40.990328 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:28:40.990354 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:28:40.990377 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:28:40.990400 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:28:40.990449 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:40.991096 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:28:41.014750 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:28:41.036655 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:28:41.057901 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:28:41.075308 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:28:41.092360 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:28:41.110513 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:28:41.128091 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:28:41.145457 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:28:41.163271 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:28:41.181040 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:28:41.199219 1446402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:28:41.211792 1446402 ssh_runner.go:195] Run: openssl version
	I1222 00:28:41.217908 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.225276 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:28:41.232519 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236312 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236370 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.277548 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:28:41.285110 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.292519 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:28:41.300133 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304025 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304090 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.345481 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:28:41.353129 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.360704 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:28:41.368364 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372067 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372146 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.413233 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
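The `openssl x509 -hash` / `ln -fs` / `test -L` sequence above follows OpenSSL's hashed-directory convention: each trusted CA in `/etc/ssl/certs` is reachable via a symlink named `<subject-hash>.0`. A minimal sketch of the same steps in a temp directory (the self-signed certificate and the `minikubeCA-demo` subject are generated purely for illustration, not taken from the run):

```shell
#!/usr/bin/env bash
set -euo pipefail
dir=$(mktemp -d)

# Throwaway self-signed CA, standing in for minikubeCA.pem (illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA-demo" -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null

# Same steps the log shows: hash the cert, then install the <hash>.0 symlink.
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"

# OpenSSL-style lookup: the symlink must exist and resolve to the cert.
test -L "$dir/$hash.0" && echo "installed as $hash.0"
```

The hash (e.g. `b5213941` in the log) is derived from the certificate subject, which is why each of the three certs above lands under a different `.0` name.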
	I1222 00:28:41.421216 1446402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:28:41.424941 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:28:41.465845 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:28:41.509256 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:28:41.550176 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:28:41.591240 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:28:41.636957 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
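The six `-checkend 86400` probes above ask whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit status is what triggers regeneration. A hedged stand-alone sketch (the certificate here is a freshly generated dummy, not one of the cluster's):

```shell
#!/usr/bin/env bash
set -euo pipefail
dir=$(mktemp -d)

# Dummy 2-day certificate standing in for e.g. apiserver-kubelet-client.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=apiserver-demo" -keyout "$dir/k" -out "$dir/c.crt" 2>/dev/null

# Exit 0: the certificate will NOT expire within the next 86400s (24h).
if openssl x509 -noout -in "$dir/c.crt" -checkend 86400; then
  echo "cert valid for at least 24h"
else
  echo "cert expires within 24h; would regenerate"
fi
```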
	I1222 00:28:41.677583 1446402 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:41.677666 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:28:41.677732 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.707257 1446402 cri.go:96] found id: ""
	I1222 00:28:41.707323 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:28:41.715403 1446402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:28:41.715412 1446402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:28:41.715487 1446402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:28:41.722811 1446402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.723316 1446402 kubeconfig.go:125] found "functional-973657" server: "https://192.168.49.2:8441"
	I1222 00:28:41.724615 1446402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:28:41.732758 1446402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:14:06.897851329 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:28:40.577260246 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
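The drift detection above relies only on `diff`'s exit status: 0 means the deployed `kubeadm.yaml` matches the newly rendered `.new` file, 1 means the cluster must be reconfigured (here, because the admission-plugin list changed). A small sketch of the same pattern with toy stand-in files:

```shell
#!/usr/bin/env bash
set -euo pipefail
dir=$(mktemp -d)

# Old vs. new kubeadm configs (toy stand-ins for /var/tmp/minikube/kubeadm.yaml*).
printf 'value: "NamespaceLifecycle,ResourceQuota"\n' > "$dir/kubeadm.yaml"
printf 'value: "NamespaceAutoProvision"\n' > "$dir/kubeadm.yaml.new"

# diff exits 0 when identical, 1 when they differ; drift means "reconfigure,
# then copy the .new file over the old one" — as the log does at 00:28:41.860.
if diff -u "$dir/kubeadm.yaml" "$dir/kubeadm.yaml.new"; then
  echo "no drift"
else
  echo "drift detected: reconfiguring from kubeadm.yaml.new"
  cp "$dir/kubeadm.yaml.new" "$dir/kubeadm.yaml"
fi
```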
	I1222 00:28:41.732777 1446402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:28:41.732788 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1222 00:28:41.732853 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.777317 1446402 cri.go:96] found id: ""
	I1222 00:28:41.777381 1446402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:28:41.795672 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:28:41.803787 1446402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 00:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 22 00:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:18 /etc/kubernetes/scheduler.conf
	
	I1222 00:28:41.803861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:28:41.811861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:28:41.819685 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.819741 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:28:41.827761 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.835493 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.835553 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.843556 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:28:41.851531 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.851587 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
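Each of the three kubeconfig checks above is the same grep-or-remove idiom: `grep` the expected control-plane endpoint; on a non-zero exit, delete the file so the subsequent `kubeadm init phase kubeconfig` regenerates it. A self-contained sketch (the file content is hypothetical, chosen so the endpoint is absent):

```shell
#!/usr/bin/env bash
set -euo pipefail
dir=$(mktemp -d)
endpoint='https://control-plane.minikube.internal:8441'

# A kubeconfig pointing at a raw-IP endpoint instead (hypothetical content).
printf 'server: https://192.168.49.2:8441\n' > "$dir/kubelet.conf"

# grep -q exits 1 when the expected endpoint is absent; remove the stale file.
if ! grep -q "$endpoint" "$dir/kubelet.conf"; then
  rm -f "$dir/kubelet.conf"
fi
test ! -e "$dir/kubelet.conf" && echo "stale kubelet.conf removed"
```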
	I1222 00:28:41.860145 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:28:41.868219 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:41.913117 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.003962 1446402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090816856s)
	I1222 00:28:43.004040 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.212066 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.273727 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.319285 1446402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:28:43.319357 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:43.819515 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:44.319574 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:44.820396 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:45.320627 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:45.819505 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:46.320284 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:46.820238 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:47.320289 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:47.819431 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:48.319438 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:48.820203 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:49.320163 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:49.820253 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:50.320340 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:50.820353 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:51.320143 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:51.819557 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:52.319533 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:52.819532 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:53.319872 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:53.819550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:54.320283 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:54.820042 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:55.319836 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:55.820287 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:56.320324 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:56.819506 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:57.320256 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:57.820518 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:58.320250 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:58.819713 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:59.319563 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:59.820373 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:00.320250 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:00.819558 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:01.320363 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:01.820455 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:02.320264 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:02.820241 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:03.320188 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:03.820211 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:04.319540 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:04.819438 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:05.320247 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:05.820436 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:06.320370 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:06.819539 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:07.319751 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:07.820258 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:08.319764 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:08.820469 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:09.319565 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:09.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:10.319521 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:10.819559 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:11.319690 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:11.819773 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:12.319579 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:12.820346 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:13.320217 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:13.820210 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:14.320172 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:14.819550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:15.319430 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:15.820196 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:16.319448 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:16.819507 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:17.320526 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:17.819522 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:18.319482 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:18.820476 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:19.319544 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:19.820495 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:20.319558 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:20.820340 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:21.320236 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:21.820518 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:22.319699 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:22.819573 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:23.319567 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:23.819533 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:24.319887 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:24.819624 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:25.320279 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:25.820331 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:26.320411 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:26.819541 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:27.320442 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:27.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:28.319550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:28.820464 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:29.320504 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:29.819508 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:30.319443 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:30.819528 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:31.319503 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:31.819888 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:32.319676 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:32.819521 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:33.319477 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:33.819820 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:34.319851 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:34.819577 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:35.320381 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:35.820397 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:36.320202 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:36.820411 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:37.319449 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:37.819535 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:38.319499 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:38.820465 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:39.319496 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:39.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:40.319552 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:40.819553 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:41.319757 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:41.820402 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:42.319587 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:42.820218 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
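The long run of `pgrep` probes above is a fixed-interval poll: every ~500ms minikube asks whether a `kube-apiserver` process has appeared, and after roughly a minute it gives up and falls through to log collection (which is what happens next at 00:29:43). A sketch of that loop shape, using a background `sleep` as a stand-in process (`wait_for_process` and its names are mine, not minikube's):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Poll for a process matching a pattern every 500ms, for at most `attempts`
# probes — the same shape as the pgrep loop in the log above.
wait_for_process() {
  local pattern=$1 attempts=$2 tried=0
  while ! pgrep -f "$pattern" >/dev/null 2>&1; do
    sleep 0.5
    tried=$((tried + 1))
    if [ "$tried" -ge "$attempts" ]; then
      return 1   # deadline reached; caller falls back to gathering logs
    fi
  done
  return 0
}

sleep 5 &   # stand-in for the kube-apiserver process
if wait_for_process "sleep 5" 20; then echo "process appeared"; fi
```

In the failing run the loop never succeeds, so the function's failure branch is the path taken.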
	I1222 00:29:43.320359 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:43.320440 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:43.346534 1446402 cri.go:96] found id: ""
	I1222 00:29:43.346547 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.346555 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:43.346560 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:43.346649 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:43.373797 1446402 cri.go:96] found id: ""
	I1222 00:29:43.373813 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.373820 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:43.373825 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:43.373887 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:43.399270 1446402 cri.go:96] found id: ""
	I1222 00:29:43.399284 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.399291 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:43.399296 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:43.399363 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:43.423840 1446402 cri.go:96] found id: ""
	I1222 00:29:43.423855 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.423862 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:43.423868 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:43.423926 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:43.447537 1446402 cri.go:96] found id: ""
	I1222 00:29:43.447551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.447558 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:43.447564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:43.447626 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:43.474001 1446402 cri.go:96] found id: ""
	I1222 00:29:43.474016 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.474024 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:43.474029 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:43.474123 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:43.502707 1446402 cri.go:96] found id: ""
	I1222 00:29:43.502721 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.502728 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:43.502736 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:43.502746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:43.560014 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:43.560034 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:43.575973 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:43.575990 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:43.644984 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:43.636222   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.636633   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638265   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638673   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.640308   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:29:43.644996 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:43.645007 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:43.711821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:43.711841 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:46.243876 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:46.255639 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:46.255701 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:46.285594 1446402 cri.go:96] found id: ""
	I1222 00:29:46.285608 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.285615 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:46.285621 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:46.285685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:46.313654 1446402 cri.go:96] found id: ""
	I1222 00:29:46.313669 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.313676 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:46.313694 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:46.313755 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:46.339799 1446402 cri.go:96] found id: ""
	I1222 00:29:46.339815 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.339822 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:46.339828 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:46.339891 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:46.365156 1446402 cri.go:96] found id: ""
	I1222 00:29:46.365184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.365192 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:46.365198 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:46.365265 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:46.394145 1446402 cri.go:96] found id: ""
	I1222 00:29:46.394159 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.394167 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:46.394172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:46.394233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:46.418776 1446402 cri.go:96] found id: ""
	I1222 00:29:46.418790 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.418797 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:46.418803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:46.418864 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:46.442806 1446402 cri.go:96] found id: ""
	I1222 00:29:46.442820 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.442828 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:46.442841 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:46.442851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:46.499137 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:46.499157 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:46.515023 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:46.515038 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:46.583664 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:46.574797   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.575467   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577211   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577813   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.579510   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:29:46.583675 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:46.583687 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:46.647550 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:46.647569 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.182538 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:49.192713 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:49.192773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:49.216898 1446402 cri.go:96] found id: ""
	I1222 00:29:49.216912 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.216919 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:49.216924 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:49.216980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:49.249605 1446402 cri.go:96] found id: ""
	I1222 00:29:49.249618 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.249626 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:49.249631 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:49.249690 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:49.280524 1446402 cri.go:96] found id: ""
	I1222 00:29:49.280539 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.280546 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:49.280552 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:49.280611 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:49.311301 1446402 cri.go:96] found id: ""
	I1222 00:29:49.311315 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.311323 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:49.311327 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:49.311385 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:49.336538 1446402 cri.go:96] found id: ""
	I1222 00:29:49.336551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.336559 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:49.336564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:49.336624 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:49.364232 1446402 cri.go:96] found id: ""
	I1222 00:29:49.364247 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.364256 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:49.364262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:49.364326 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:49.388613 1446402 cri.go:96] found id: ""
	I1222 00:29:49.388638 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.388646 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:49.388654 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:49.388664 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:49.451680 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:49.443682   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.444280   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.445741   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.446220   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.447728   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:29:49.451690 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:49.451701 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:49.514558 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:49.514577 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.543077 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:49.543095 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:49.600979 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:49.600997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:52.116977 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:52.127516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:52.127578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:52.154761 1446402 cri.go:96] found id: ""
	I1222 00:29:52.154783 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.154790 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:52.154796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:52.154857 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:52.180288 1446402 cri.go:96] found id: ""
	I1222 00:29:52.180303 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.180310 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:52.180316 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:52.180376 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:52.208439 1446402 cri.go:96] found id: ""
	I1222 00:29:52.208454 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.208461 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:52.208466 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:52.208527 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:52.233901 1446402 cri.go:96] found id: ""
	I1222 00:29:52.233914 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.233932 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:52.233938 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:52.234004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:52.269797 1446402 cri.go:96] found id: ""
	I1222 00:29:52.269821 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.269829 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:52.269835 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:52.269901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:52.297204 1446402 cri.go:96] found id: ""
	I1222 00:29:52.297219 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.297236 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:52.297242 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:52.297308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:52.326411 1446402 cri.go:96] found id: ""
	I1222 00:29:52.326425 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.326433 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:52.326440 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:52.326450 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:52.387688 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:52.379564   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.380283   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.381856   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.382478   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.383956   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:29:52.387700 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:52.387716 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:52.453506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:52.453524 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:52.483252 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:52.483269 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:52.540786 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:52.540804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.056509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:55.067103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:55.067178 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:55.093620 1446402 cri.go:96] found id: ""
	I1222 00:29:55.093649 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.093656 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:55.093663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:55.093734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:55.128411 1446402 cri.go:96] found id: ""
	I1222 00:29:55.128424 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.128432 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:55.128436 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:55.128504 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:55.154633 1446402 cri.go:96] found id: ""
	I1222 00:29:55.154646 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.154654 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:55.154659 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:55.154730 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:55.181169 1446402 cri.go:96] found id: ""
	I1222 00:29:55.181184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.181191 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:55.181197 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:55.181256 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:55.206353 1446402 cri.go:96] found id: ""
	I1222 00:29:55.206367 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.206374 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:55.206379 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:55.206439 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:55.234930 1446402 cri.go:96] found id: ""
	I1222 00:29:55.234963 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.234971 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:55.234977 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:55.235052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:55.269275 1446402 cri.go:96] found id: ""
	I1222 00:29:55.269290 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.269298 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:55.269306 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:55.269316 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:55.332423 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:55.332442 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.348393 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:55.348409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:55.411746 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:55.403223   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.403831   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.405510   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.406036   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.407668   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:29:55.411756 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:55.411767 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:55.478898 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:55.478918 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.007945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:58.028590 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:58.028654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:58.053263 1446402 cri.go:96] found id: ""
	I1222 00:29:58.053277 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.053284 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:58.053290 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:58.053349 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:58.078650 1446402 cri.go:96] found id: ""
	I1222 00:29:58.078664 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.078671 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:58.078676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:58.078746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:58.104284 1446402 cri.go:96] found id: ""
	I1222 00:29:58.104298 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.104305 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:58.104310 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:58.104372 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:58.133078 1446402 cri.go:96] found id: ""
	I1222 00:29:58.133103 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.133110 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:58.133116 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:58.133194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:58.160079 1446402 cri.go:96] found id: ""
	I1222 00:29:58.160092 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.160100 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:58.160105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:58.160209 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:58.184050 1446402 cri.go:96] found id: ""
	I1222 00:29:58.184070 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.184091 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:58.184098 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:58.184161 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:58.207826 1446402 cri.go:96] found id: ""
	I1222 00:29:58.207840 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.207847 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:58.207854 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:58.207864 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:58.275859 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:58.275886 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.308307 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:58.308324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:58.365952 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:58.365971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:58.381771 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:58.381788 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:58.449730 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:00.951841 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:00.968627 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:00.968704 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:01.017629 1446402 cri.go:96] found id: ""
	I1222 00:30:01.017648 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.017657 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:01.017665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:01.017745 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:01.052801 1446402 cri.go:96] found id: ""
	I1222 00:30:01.052819 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.052829 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:01.052837 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:01.052908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:01.090908 1446402 cri.go:96] found id: ""
	I1222 00:30:01.090924 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.090942 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:01.090949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:01.091024 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:01.135566 1446402 cri.go:96] found id: ""
	I1222 00:30:01.135584 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.135592 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:01.135599 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:01.135681 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:01.183704 1446402 cri.go:96] found id: ""
	I1222 00:30:01.183720 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.183728 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:01.183734 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:01.183803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:01.237284 1446402 cri.go:96] found id: ""
	I1222 00:30:01.237300 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.237315 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:01.237321 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:01.237397 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:01.274702 1446402 cri.go:96] found id: ""
	I1222 00:30:01.274719 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.274727 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:01.274735 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:01.274746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:01.337817 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:01.337838 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:01.357916 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:01.357936 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:01.439644 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:01.439657 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:01.439672 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:01.506150 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:01.506173 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.047348 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:04.057922 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:04.057990 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:04.083599 1446402 cri.go:96] found id: ""
	I1222 00:30:04.083613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.083620 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:04.083625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:04.083697 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:04.109159 1446402 cri.go:96] found id: ""
	I1222 00:30:04.109174 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.109181 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:04.109186 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:04.109245 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:04.138314 1446402 cri.go:96] found id: ""
	I1222 00:30:04.138329 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.138336 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:04.138344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:04.138405 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:04.164036 1446402 cri.go:96] found id: ""
	I1222 00:30:04.164051 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.164058 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:04.164078 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:04.164143 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:04.189566 1446402 cri.go:96] found id: ""
	I1222 00:30:04.189581 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.189588 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:04.189593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:04.189657 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:04.214647 1446402 cri.go:96] found id: ""
	I1222 00:30:04.214662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.214669 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:04.214675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:04.214746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:04.243657 1446402 cri.go:96] found id: ""
	I1222 00:30:04.243672 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.243680 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:04.243687 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:04.243700 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:04.312395 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:04.312414 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.342163 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:04.342181 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:04.399936 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:04.399958 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:04.416847 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:04.416863 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:04.482794 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:06.983066 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:06.993652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:06.993715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:07.023165 1446402 cri.go:96] found id: ""
	I1222 00:30:07.023180 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.023187 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:07.023192 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:07.023255 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:07.049538 1446402 cri.go:96] found id: ""
	I1222 00:30:07.049552 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.049560 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:07.049565 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:07.049629 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:07.075257 1446402 cri.go:96] found id: ""
	I1222 00:30:07.075277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.075284 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:07.075289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:07.075351 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:07.101441 1446402 cri.go:96] found id: ""
	I1222 00:30:07.101456 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.101463 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:07.101469 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:07.101532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:07.128366 1446402 cri.go:96] found id: ""
	I1222 00:30:07.128380 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.128392 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:07.128398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:07.128460 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:07.152988 1446402 cri.go:96] found id: ""
	I1222 00:30:07.153005 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.153013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:07.153019 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:07.153079 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:07.178387 1446402 cri.go:96] found id: ""
	I1222 00:30:07.178401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.178409 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:07.178428 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:07.178440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:07.194549 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:07.194566 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:07.271952 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:07.271961 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:07.271973 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:07.346114 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:07.346134 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:07.373577 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:07.373593 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:09.930306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:09.940949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:09.941017 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:09.968763 1446402 cri.go:96] found id: ""
	I1222 00:30:09.968777 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.968784 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:09.968789 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:09.968848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:09.992991 1446402 cri.go:96] found id: ""
	I1222 00:30:09.993006 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.993013 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:09.993018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:09.993082 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:10.029788 1446402 cri.go:96] found id: ""
	I1222 00:30:10.029804 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.029811 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:10.029817 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:10.029886 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:10.067395 1446402 cri.go:96] found id: ""
	I1222 00:30:10.067410 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.067416 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:10.067422 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:10.067499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:10.095007 1446402 cri.go:96] found id: ""
	I1222 00:30:10.095022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.095030 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:10.095036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:10.095101 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:10.123474 1446402 cri.go:96] found id: ""
	I1222 00:30:10.123495 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.123503 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:10.123509 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:10.123573 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:10.153420 1446402 cri.go:96] found id: ""
	I1222 00:30:10.153435 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.153441 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:10.153448 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:10.153459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:10.210172 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:10.210193 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:10.226706 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:10.226725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:10.315292 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:10.315303 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:10.315313 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:10.383703 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:10.383725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:12.913638 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:12.925302 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:12.925369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:12.950905 1446402 cri.go:96] found id: ""
	I1222 00:30:12.950919 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.950930 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:12.950935 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:12.950996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:12.975557 1446402 cri.go:96] found id: ""
	I1222 00:30:12.975587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.975596 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:12.975609 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:12.975679 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:13.000143 1446402 cri.go:96] found id: ""
	I1222 00:30:13.000157 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.000165 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:13.000171 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:13.000234 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:13.026672 1446402 cri.go:96] found id: ""
	I1222 00:30:13.026694 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.026702 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:13.026709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:13.026773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:13.055830 1446402 cri.go:96] found id: ""
	I1222 00:30:13.055846 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.055854 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:13.055859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:13.055923 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:13.082359 1446402 cri.go:96] found id: ""
	I1222 00:30:13.082374 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.082382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:13.082387 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:13.082449 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:13.108828 1446402 cri.go:96] found id: ""
	I1222 00:30:13.108842 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.108850 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:13.108858 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:13.108869 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:13.165350 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:13.165373 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:13.181480 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:13.181497 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:13.246107 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:13.246118 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:13.246128 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:13.320470 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:13.320490 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:15.851791 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:15.862330 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:15.862391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:15.890336 1446402 cri.go:96] found id: ""
	I1222 00:30:15.890350 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.890358 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:15.890364 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:15.890428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:15.917647 1446402 cri.go:96] found id: ""
	I1222 00:30:15.917662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.917670 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:15.917675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:15.917737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:15.948052 1446402 cri.go:96] found id: ""
	I1222 00:30:15.948074 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.948083 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:15.948089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:15.948155 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:15.973080 1446402 cri.go:96] found id: ""
	I1222 00:30:15.973094 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.973101 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:15.973107 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:15.973167 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:15.998935 1446402 cri.go:96] found id: ""
	I1222 00:30:15.998950 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.998957 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:15.998962 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:15.999025 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:16.027611 1446402 cri.go:96] found id: ""
	I1222 00:30:16.027628 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.027638 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:16.027644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:16.027727 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:16.053780 1446402 cri.go:96] found id: ""
	I1222 00:30:16.053794 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.053802 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:16.053809 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:16.053823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:16.124007 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:16.124030 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:16.124042 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:16.186716 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:16.186736 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:16.216494 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:16.216511 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:16.279107 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:16.279127 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:18.798677 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:18.809493 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:18.809564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:18.835308 1446402 cri.go:96] found id: ""
	I1222 00:30:18.835323 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.835337 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:18.835344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:18.835408 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:18.861968 1446402 cri.go:96] found id: ""
	I1222 00:30:18.861982 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.861989 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:18.861995 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:18.862052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:18.887230 1446402 cri.go:96] found id: ""
	I1222 00:30:18.887243 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.887250 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:18.887256 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:18.887313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:18.912928 1446402 cri.go:96] found id: ""
	I1222 00:30:18.912942 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.912949 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:18.912954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:18.913016 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:18.939487 1446402 cri.go:96] found id: ""
	I1222 00:30:18.939501 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.939509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:18.939514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:18.939578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:18.973342 1446402 cri.go:96] found id: ""
	I1222 00:30:18.973356 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.973364 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:18.973369 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:18.973428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:18.997889 1446402 cri.go:96] found id: ""
	I1222 00:30:18.997913 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.997920 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:18.997927 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:18.997938 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:19.055572 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:19.055591 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:19.072427 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:19.072443 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:19.139616 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:19.139628 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:19.139638 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:19.202678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:19.202697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:21.731757 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:21.742262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:21.742322 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:21.768714 1446402 cri.go:96] found id: ""
	I1222 00:30:21.768728 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.768736 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:21.768741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:21.768804 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:21.799253 1446402 cri.go:96] found id: ""
	I1222 00:30:21.799269 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.799276 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:21.799283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:21.799344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:21.824941 1446402 cri.go:96] found id: ""
	I1222 00:30:21.824963 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.824970 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:21.824975 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:21.825035 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:21.850741 1446402 cri.go:96] found id: ""
	I1222 00:30:21.850755 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.850762 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:21.850767 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:21.850829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:21.876572 1446402 cri.go:96] found id: ""
	I1222 00:30:21.876587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.876595 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:21.876600 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:21.876660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:21.902799 1446402 cri.go:96] found id: ""
	I1222 00:30:21.902814 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.902821 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:21.902827 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:21.902888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:21.928559 1446402 cri.go:96] found id: ""
	I1222 00:30:21.928573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.928580 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:21.928587 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:21.928597 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:21.984144 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:21.984164 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:22.000384 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:22.000402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:22.073778 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:22.073791 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:22.073804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:22.146346 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:22.146377 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.676106 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:24.687741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:24.687862 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:24.714182 1446402 cri.go:96] found id: ""
	I1222 00:30:24.714204 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.714212 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:24.714217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:24.714281 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:24.740930 1446402 cri.go:96] found id: ""
	I1222 00:30:24.740944 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.740951 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:24.740957 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:24.741018 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:24.767599 1446402 cri.go:96] found id: ""
	I1222 00:30:24.767613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.767621 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:24.767626 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:24.767685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:24.792739 1446402 cri.go:96] found id: ""
	I1222 00:30:24.792753 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.792760 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:24.792766 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:24.792827 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:24.816926 1446402 cri.go:96] found id: ""
	I1222 00:30:24.816940 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.816948 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:24.816953 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:24.817012 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:24.842765 1446402 cri.go:96] found id: ""
	I1222 00:30:24.842780 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.842788 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:24.842794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:24.842872 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:24.869078 1446402 cri.go:96] found id: ""
	I1222 00:30:24.869092 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.869099 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:24.869108 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:24.869119 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.903296 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:24.903312 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:24.961056 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:24.961075 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:24.976812 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:24.976828 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:25.069840 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:25.069853 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:25.069866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.636563 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:27.647100 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:27.647166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:27.672723 1446402 cri.go:96] found id: ""
	I1222 00:30:27.672737 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.672745 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:27.672750 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:27.672813 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:27.702441 1446402 cri.go:96] found id: ""
	I1222 00:30:27.702455 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.702462 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:27.702468 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:27.702530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:27.731422 1446402 cri.go:96] found id: ""
	I1222 00:30:27.731436 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.731443 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:27.731448 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:27.731509 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:27.756265 1446402 cri.go:96] found id: ""
	I1222 00:30:27.756279 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.756287 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:27.756292 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:27.756354 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:27.779774 1446402 cri.go:96] found id: ""
	I1222 00:30:27.779791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.779798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:27.779804 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:27.779867 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:27.805305 1446402 cri.go:96] found id: ""
	I1222 00:30:27.805320 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.805327 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:27.805333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:27.805396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:27.835772 1446402 cri.go:96] found id: ""
	I1222 00:30:27.835786 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.835794 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:27.835802 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:27.835813 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:27.851527 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:27.851543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:27.917867 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:27.917877 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:27.917889 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.981255 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:27.981274 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:28.012714 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:28.012732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:30.570668 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:30.581032 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:30.581096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:30.605788 1446402 cri.go:96] found id: ""
	I1222 00:30:30.605801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.605809 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:30.605816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:30.605878 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:30.630263 1446402 cri.go:96] found id: ""
	I1222 00:30:30.630277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.630284 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:30.630289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:30.630348 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:30.655578 1446402 cri.go:96] found id: ""
	I1222 00:30:30.655593 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.655600 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:30.655608 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:30.655668 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:30.680304 1446402 cri.go:96] found id: ""
	I1222 00:30:30.680319 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.680326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:30.680332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:30.680390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:30.706799 1446402 cri.go:96] found id: ""
	I1222 00:30:30.706812 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.706819 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:30.706826 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:30.706888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:30.732009 1446402 cri.go:96] found id: ""
	I1222 00:30:30.732023 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.732030 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:30.732036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:30.732145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:30.758260 1446402 cri.go:96] found id: ""
	I1222 00:30:30.758274 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.758282 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:30.758289 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:30.758302 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:30.773937 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:30.773955 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:30.836710 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:30.836720 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:30.836734 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:30.898609 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:30.898629 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:30.926987 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:30.927002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.488514 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:33.500859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:33.500936 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:33.534647 1446402 cri.go:96] found id: ""
	I1222 00:30:33.534662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.534669 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:33.534675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:33.534740 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:33.567528 1446402 cri.go:96] found id: ""
	I1222 00:30:33.567542 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.567550 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:33.567556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:33.567619 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:33.592756 1446402 cri.go:96] found id: ""
	I1222 00:30:33.592770 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.592777 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:33.592783 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:33.592843 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:33.618141 1446402 cri.go:96] found id: ""
	I1222 00:30:33.618155 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.618162 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:33.618169 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:33.618229 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:33.643676 1446402 cri.go:96] found id: ""
	I1222 00:30:33.643690 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.643697 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:33.643702 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:33.643766 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:33.675007 1446402 cri.go:96] found id: ""
	I1222 00:30:33.675022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.675029 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:33.675035 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:33.675096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:33.701088 1446402 cri.go:96] found id: ""
	I1222 00:30:33.701104 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.701112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:33.701119 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:33.701130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.757879 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:33.757898 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:33.773857 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:33.773873 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:33.838724 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:33.838735 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:33.838745 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:33.901316 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:33.901336 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:36.433582 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:36.443819 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:36.443881 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:36.467506 1446402 cri.go:96] found id: ""
	I1222 00:30:36.467521 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.467528 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:36.467534 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:36.467596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:36.502511 1446402 cri.go:96] found id: ""
	I1222 00:30:36.502525 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.502532 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:36.502538 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:36.502596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:36.528768 1446402 cri.go:96] found id: ""
	I1222 00:30:36.528782 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.528789 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:36.528795 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:36.528856 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:36.563520 1446402 cri.go:96] found id: ""
	I1222 00:30:36.563534 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.563552 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:36.563558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:36.563625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:36.587776 1446402 cri.go:96] found id: ""
	I1222 00:30:36.587791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.587798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:36.587803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:36.587870 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:36.613760 1446402 cri.go:96] found id: ""
	I1222 00:30:36.613774 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.613781 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:36.613786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:36.613846 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:36.638515 1446402 cri.go:96] found id: ""
	I1222 00:30:36.638529 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.638536 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:36.638544 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:36.638554 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:36.697219 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:36.697239 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:36.713436 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:36.713452 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:36.780368 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:36.780381 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:36.780393 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:36.842888 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:36.842908 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.372135 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:39.382719 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:39.382781 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:39.408981 1446402 cri.go:96] found id: ""
	I1222 00:30:39.408994 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.409002 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:39.409007 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:39.409066 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:39.442559 1446402 cri.go:96] found id: ""
	I1222 00:30:39.442573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.442581 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:39.442586 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:39.442643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:39.468577 1446402 cri.go:96] found id: ""
	I1222 00:30:39.468591 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.468598 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:39.468603 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:39.468660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:39.510316 1446402 cri.go:96] found id: ""
	I1222 00:30:39.510331 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.510339 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:39.510345 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:39.510407 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:39.540511 1446402 cri.go:96] found id: ""
	I1222 00:30:39.540526 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.540538 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:39.540544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:39.540607 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:39.567225 1446402 cri.go:96] found id: ""
	I1222 00:30:39.567239 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.567246 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:39.567251 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:39.567313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:39.592091 1446402 cri.go:96] found id: ""
	I1222 00:30:39.592105 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.592112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:39.592119 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:39.592130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.622343 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:39.622362 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:39.679425 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:39.679444 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:39.696213 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:39.696230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:39.769659 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:39.769670 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:39.769680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.336173 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:42.346558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:42.346621 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:42.370787 1446402 cri.go:96] found id: ""
	I1222 00:30:42.370802 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.370810 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:42.370816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:42.370877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:42.395960 1446402 cri.go:96] found id: ""
	I1222 00:30:42.395973 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.395980 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:42.395985 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:42.396044 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:42.421477 1446402 cri.go:96] found id: ""
	I1222 00:30:42.421491 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.421498 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:42.421504 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:42.421564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:42.446555 1446402 cri.go:96] found id: ""
	I1222 00:30:42.446569 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.446577 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:42.446582 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:42.446642 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:42.472081 1446402 cri.go:96] found id: ""
	I1222 00:30:42.472098 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.472105 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:42.472110 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:42.472169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:42.511362 1446402 cri.go:96] found id: ""
	I1222 00:30:42.511375 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.511382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:42.511388 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:42.511447 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:42.547512 1446402 cri.go:96] found id: ""
	I1222 00:30:42.547527 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.547533 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:42.547541 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:42.547551 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.615776 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:42.615799 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:42.646130 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:42.646146 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:42.705658 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:42.705677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:42.721590 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:42.721610 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:42.787813 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.288531 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:45.303331 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:45.303401 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:45.338450 1446402 cri.go:96] found id: ""
	I1222 00:30:45.338484 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.338492 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:45.338499 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:45.338571 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:45.365473 1446402 cri.go:96] found id: ""
	I1222 00:30:45.365487 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.365494 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:45.365500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:45.365561 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:45.390271 1446402 cri.go:96] found id: ""
	I1222 00:30:45.390285 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.390292 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:45.390298 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:45.390357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:45.414377 1446402 cri.go:96] found id: ""
	I1222 00:30:45.414391 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.414398 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:45.414405 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:45.414465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:45.443708 1446402 cri.go:96] found id: ""
	I1222 00:30:45.443722 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.443729 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:45.443735 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:45.443800 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:45.469111 1446402 cri.go:96] found id: ""
	I1222 00:30:45.469126 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.469133 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:45.469138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:45.469199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:45.506648 1446402 cri.go:96] found id: ""
	I1222 00:30:45.506662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.506670 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:45.506678 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:45.506688 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:45.570224 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:45.570244 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:45.587665 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:45.587682 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:45.658642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.658668 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:45.658680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:45.726278 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:45.726296 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:48.258377 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:48.269041 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:48.269106 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:48.296090 1446402 cri.go:96] found id: ""
	I1222 00:30:48.296110 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.296118 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:48.296124 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:48.296189 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:48.324810 1446402 cri.go:96] found id: ""
	I1222 00:30:48.324824 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.324838 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:48.324844 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:48.324907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:48.355386 1446402 cri.go:96] found id: ""
	I1222 00:30:48.355401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.355408 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:48.355413 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:48.355478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:48.382715 1446402 cri.go:96] found id: ""
	I1222 00:30:48.382738 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.382746 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:48.382752 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:48.382829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:48.408554 1446402 cri.go:96] found id: ""
	I1222 00:30:48.408567 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.408574 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:48.408580 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:48.408643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:48.434270 1446402 cri.go:96] found id: ""
	I1222 00:30:48.434293 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.434300 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:48.434306 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:48.434374 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:48.459881 1446402 cri.go:96] found id: ""
	I1222 00:30:48.459895 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.459903 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:48.459911 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:48.459921 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:48.517466 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:48.517484 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:48.537053 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:48.537070 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:48.604854 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:48.604864 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:48.604874 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:48.671361 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:48.671387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:51.200853 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:51.211776 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:51.211839 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:51.238170 1446402 cri.go:96] found id: ""
	I1222 00:30:51.238186 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.238194 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:51.238199 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:51.238268 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:51.269105 1446402 cri.go:96] found id: ""
	I1222 00:30:51.269134 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.269142 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:51.269148 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:51.269219 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:51.293434 1446402 cri.go:96] found id: ""
	I1222 00:30:51.293457 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.293464 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:51.293470 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:51.293541 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:51.319040 1446402 cri.go:96] found id: ""
	I1222 00:30:51.319055 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.319062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:51.319068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:51.319130 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:51.348957 1446402 cri.go:96] found id: ""
	I1222 00:30:51.348974 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.348982 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:51.348987 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:51.349051 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:51.374220 1446402 cri.go:96] found id: ""
	I1222 00:30:51.374234 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.374242 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:51.374248 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:51.374308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:51.399159 1446402 cri.go:96] found id: ""
	I1222 00:30:51.399173 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.399180 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:51.399188 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:51.399198 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:51.459029 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:51.459048 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:51.475298 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:51.475315 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:51.566963 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:51.566987 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:51.566997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:51.629274 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:51.629295 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:54.157280 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:54.168037 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:54.168148 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:54.193307 1446402 cri.go:96] found id: ""
	I1222 00:30:54.193321 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.193328 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:54.193333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:54.193396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:54.219101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.219115 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.219123 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:54.219128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:54.219194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:54.246374 1446402 cri.go:96] found id: ""
	I1222 00:30:54.246389 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.246396 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:54.246407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:54.246465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:54.271786 1446402 cri.go:96] found id: ""
	I1222 00:30:54.271801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.271808 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:54.271813 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:54.271879 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:54.297101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.297116 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.297123 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:54.297128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:54.297187 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:54.321971 1446402 cri.go:96] found id: ""
	I1222 00:30:54.321984 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.321991 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:54.321997 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:54.322057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:54.347313 1446402 cri.go:96] found id: ""
	I1222 00:30:54.347327 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.347334 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:54.347342 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:54.347353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:54.403888 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:54.403909 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:54.419766 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:54.419782 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:54.484682 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:54.484693 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:54.484703 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:54.552360 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:54.552378 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.081711 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:57.092202 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:57.092266 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:57.117391 1446402 cri.go:96] found id: ""
	I1222 00:30:57.117405 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.117412 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:57.117419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:57.117479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:57.143247 1446402 cri.go:96] found id: ""
	I1222 00:30:57.143261 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.143269 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:57.143274 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:57.143336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:57.167819 1446402 cri.go:96] found id: ""
	I1222 00:30:57.167833 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.167840 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:57.167845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:57.167907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:57.199021 1446402 cri.go:96] found id: ""
	I1222 00:30:57.199036 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.199043 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:57.199049 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:57.199108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:57.222971 1446402 cri.go:96] found id: ""
	I1222 00:30:57.222986 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.222993 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:57.222999 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:57.223058 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:57.248778 1446402 cri.go:96] found id: ""
	I1222 00:30:57.248792 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.248800 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:57.248806 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:57.248865 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:57.274281 1446402 cri.go:96] found id: ""
	I1222 00:30:57.274294 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.274301 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:57.274309 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:57.274319 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:57.336861 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:57.336882 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.365636 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:57.365661 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:57.423967 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:57.423989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:57.440127 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:57.440145 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:57.509798 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.010205 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:00.104650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:00.104734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:00.179982 1446402 cri.go:96] found id: ""
	I1222 00:31:00.180032 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.180041 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:00.180071 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:00.180239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:00.284701 1446402 cri.go:96] found id: ""
	I1222 00:31:00.284717 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.284725 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:00.284731 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:00.284803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:00.386635 1446402 cri.go:96] found id: ""
	I1222 00:31:00.386652 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.386659 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:00.386665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:00.386735 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:00.427920 1446402 cri.go:96] found id: ""
	I1222 00:31:00.427944 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.427959 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:00.427966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:00.428040 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:00.465116 1446402 cri.go:96] found id: ""
	I1222 00:31:00.465134 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.465144 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:00.465151 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:00.465232 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:00.499645 1446402 cri.go:96] found id: ""
	I1222 00:31:00.499660 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.499667 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:00.499673 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:00.499747 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:00.537565 1446402 cri.go:96] found id: ""
	I1222 00:31:00.537582 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.537595 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:00.537604 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:00.537615 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:00.575552 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:00.575567 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:00.633041 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:00.633063 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:00.649172 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:00.649187 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:00.724351 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.724361 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:00.724372 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.287306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:03.298001 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:03.298072 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:03.324825 1446402 cri.go:96] found id: ""
	I1222 00:31:03.324840 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.324847 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:03.324859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:03.324922 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:03.350917 1446402 cri.go:96] found id: ""
	I1222 00:31:03.350931 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.350939 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:03.350944 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:03.351006 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:03.379670 1446402 cri.go:96] found id: ""
	I1222 00:31:03.379685 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.379692 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:03.379697 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:03.379757 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:03.404478 1446402 cri.go:96] found id: ""
	I1222 00:31:03.404492 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.404499 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:03.404505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:03.404566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:03.433469 1446402 cri.go:96] found id: ""
	I1222 00:31:03.433483 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.433491 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:03.433496 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:03.433559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:03.458710 1446402 cri.go:96] found id: ""
	I1222 00:31:03.458724 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.458731 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:03.458737 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:03.458798 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:03.489628 1446402 cri.go:96] found id: ""
	I1222 00:31:03.489641 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.489648 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:03.489656 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:03.489666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.561791 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:03.561811 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:03.591660 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:03.591676 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:03.649546 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:03.649564 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:03.665699 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:03.665717 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:03.732939 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.234625 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:06.245401 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:06.245464 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:06.272079 1446402 cri.go:96] found id: ""
	I1222 00:31:06.272093 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.272100 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:06.272105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:06.272166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:06.297857 1446402 cri.go:96] found id: ""
	I1222 00:31:06.297871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.297881 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:06.297886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:06.297947 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:06.323563 1446402 cri.go:96] found id: ""
	I1222 00:31:06.323578 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.323585 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:06.323591 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:06.323654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:06.352113 1446402 cri.go:96] found id: ""
	I1222 00:31:06.352128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.352135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:06.352140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:06.352201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:06.383883 1446402 cri.go:96] found id: ""
	I1222 00:31:06.383897 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.383906 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:06.383911 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:06.383980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:06.410293 1446402 cri.go:96] found id: ""
	I1222 00:31:06.410307 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.410314 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:06.410319 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:06.410379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:06.436428 1446402 cri.go:96] found id: ""
	I1222 00:31:06.436442 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.436449 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:06.436457 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:06.436467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:06.493371 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:06.493391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:06.511382 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:06.511400 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:06.582246 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.582256 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:06.582266 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:06.644909 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:06.644931 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.176116 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:09.186886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:09.186957 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:09.212045 1446402 cri.go:96] found id: ""
	I1222 00:31:09.212081 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.212088 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:09.212094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:09.212169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:09.237345 1446402 cri.go:96] found id: ""
	I1222 00:31:09.237360 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.237367 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:09.237373 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:09.237435 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:09.262938 1446402 cri.go:96] found id: ""
	I1222 00:31:09.262953 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.262960 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:09.262966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:09.263027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:09.288202 1446402 cri.go:96] found id: ""
	I1222 00:31:09.288216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.288223 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:09.288228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:09.288291 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:09.313061 1446402 cri.go:96] found id: ""
	I1222 00:31:09.313075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.313083 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:09.313088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:09.313151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:09.342668 1446402 cri.go:96] found id: ""
	I1222 00:31:09.342683 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.342691 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:09.342696 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:09.342760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:09.370215 1446402 cri.go:96] found id: ""
	I1222 00:31:09.370239 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.370249 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:09.370258 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:09.370270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:09.433823 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:09.433834 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:09.433846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:09.496002 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:09.496024 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.538432 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:09.538457 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:09.599912 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:09.599933 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.117068 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:12.128268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:12.128331 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:12.154851 1446402 cri.go:96] found id: ""
	I1222 00:31:12.154865 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.154873 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:12.154878 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:12.154961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:12.180838 1446402 cri.go:96] found id: ""
	I1222 00:31:12.180852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.180860 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:12.180865 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:12.180927 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:12.205653 1446402 cri.go:96] found id: ""
	I1222 00:31:12.205667 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.205683 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:12.205689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:12.205760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:12.232339 1446402 cri.go:96] found id: ""
	I1222 00:31:12.232352 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.232360 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:12.232365 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:12.232425 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:12.257997 1446402 cri.go:96] found id: ""
	I1222 00:31:12.258013 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.258020 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:12.258026 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:12.258113 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:12.282449 1446402 cri.go:96] found id: ""
	I1222 00:31:12.282464 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.282472 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:12.282478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:12.282548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:12.308351 1446402 cri.go:96] found id: ""
	I1222 00:31:12.308365 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.308372 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:12.308380 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:12.308391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:12.365268 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:12.365286 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.381163 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:12.381180 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:12.448592 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:12.448603 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:12.448614 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:12.512421 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:12.512440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:15.042734 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:15.076968 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:15.077038 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:15.105454 1446402 cri.go:96] found id: ""
	I1222 00:31:15.105469 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.105477 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:15.105484 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:15.105548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:15.133491 1446402 cri.go:96] found id: ""
	I1222 00:31:15.133517 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.133525 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:15.133531 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:15.133610 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:15.161141 1446402 cri.go:96] found id: ""
	I1222 00:31:15.161155 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.161162 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:15.161168 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:15.161243 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:15.189035 1446402 cri.go:96] found id: ""
	I1222 00:31:15.189062 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.189071 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:15.189077 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:15.189153 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:15.215453 1446402 cri.go:96] found id: ""
	I1222 00:31:15.215467 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.215474 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:15.215479 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:15.215542 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:15.241518 1446402 cri.go:96] found id: ""
	I1222 00:31:15.241542 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.241550 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:15.241556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:15.241627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:15.270847 1446402 cri.go:96] found id: ""
	I1222 00:31:15.270862 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.270878 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:15.270886 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:15.270896 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:15.329892 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:15.329919 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:15.345769 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:15.345787 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:15.412686 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:15.412697 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:15.412708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:15.475513 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:15.475533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:18.013729 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:18.025498 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:18.025570 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:18.051451 1446402 cri.go:96] found id: ""
	I1222 00:31:18.051466 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.051473 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:18.051478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:18.051540 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:18.078412 1446402 cri.go:96] found id: ""
	I1222 00:31:18.078428 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.078436 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:18.078442 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:18.078511 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:18.105039 1446402 cri.go:96] found id: ""
	I1222 00:31:18.105054 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.105062 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:18.105067 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:18.105129 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:18.132285 1446402 cri.go:96] found id: ""
	I1222 00:31:18.132300 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.132308 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:18.132314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:18.132379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:18.160762 1446402 cri.go:96] found id: ""
	I1222 00:31:18.160781 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.160788 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:18.160794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:18.160855 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:18.187281 1446402 cri.go:96] found id: ""
	I1222 00:31:18.187295 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.187303 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:18.187308 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:18.187369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:18.214033 1446402 cri.go:96] found id: ""
	I1222 00:31:18.214048 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.214055 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:18.214062 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:18.214072 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:18.274937 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:18.274957 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:18.291496 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:18.291514 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:18.356830 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:18.356841 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:18.356851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:18.420006 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:18.420026 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:20.955836 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:20.966430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:20.966499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:20.992202 1446402 cri.go:96] found id: ""
	I1222 00:31:20.992216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:20.992223 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:20.992229 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:20.992292 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:21.020435 1446402 cri.go:96] found id: ""
	I1222 00:31:21.020449 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.020456 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:21.020462 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:21.020525 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:21.045920 1446402 cri.go:96] found id: ""
	I1222 00:31:21.045934 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.045940 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:21.045945 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:21.046007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:21.069898 1446402 cri.go:96] found id: ""
	I1222 00:31:21.069912 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.069920 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:21.069926 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:21.069986 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:21.096061 1446402 cri.go:96] found id: ""
	I1222 00:31:21.096075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.096082 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:21.096088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:21.096152 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:21.121380 1446402 cri.go:96] found id: ""
	I1222 00:31:21.121394 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.121401 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:21.121407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:21.121473 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:21.147060 1446402 cri.go:96] found id: ""
	I1222 00:31:21.147083 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.147091 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:21.147098 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:21.147110 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:21.163066 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:21.163085 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:21.229457 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:21.229467 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:21.229482 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:21.296323 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:21.296342 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:21.329392 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:21.329409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:23.886587 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:23.896889 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:23.896949 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:23.921855 1446402 cri.go:96] found id: ""
	I1222 00:31:23.921870 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.921878 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:23.921883 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:23.921943 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:23.947445 1446402 cri.go:96] found id: ""
	I1222 00:31:23.947459 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.947466 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:23.947471 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:23.947532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:23.973150 1446402 cri.go:96] found id: ""
	I1222 00:31:23.973164 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.973171 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:23.973176 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:23.973236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:24.000119 1446402 cri.go:96] found id: ""
	I1222 00:31:24.000133 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.000140 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:24.000145 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:24.000208 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:24.028319 1446402 cri.go:96] found id: ""
	I1222 00:31:24.028333 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.028341 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:24.028346 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:24.028416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:24.054514 1446402 cri.go:96] found id: ""
	I1222 00:31:24.054528 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.054536 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:24.054541 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:24.054623 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:24.079783 1446402 cri.go:96] found id: ""
	I1222 00:31:24.079796 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.079804 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:24.079812 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:24.079823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:24.136543 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:24.136563 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:24.152385 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:24.152402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:24.219394 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:24.219403 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:24.219413 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:24.282313 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:24.282331 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:26.811961 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:26.822374 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:26.822443 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:26.851730 1446402 cri.go:96] found id: ""
	I1222 00:31:26.851745 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.851753 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:26.851758 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:26.851820 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:26.876518 1446402 cri.go:96] found id: ""
	I1222 00:31:26.876533 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.876540 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:26.876545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:26.876614 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:26.906243 1446402 cri.go:96] found id: ""
	I1222 00:31:26.906258 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.906265 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:26.906271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:26.906332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:26.933029 1446402 cri.go:96] found id: ""
	I1222 00:31:26.933043 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.933050 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:26.933056 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:26.933124 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:26.962389 1446402 cri.go:96] found id: ""
	I1222 00:31:26.962404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.962411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:26.962417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:26.962478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:26.986566 1446402 cri.go:96] found id: ""
	I1222 00:31:26.986579 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.986587 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:26.986593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:26.986654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:27.013857 1446402 cri.go:96] found id: ""
	I1222 00:31:27.013872 1446402 logs.go:282] 0 containers: []
	W1222 00:31:27.013885 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:27.013896 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:27.013907 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:27.072155 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:27.072174 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:27.088000 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:27.088018 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:27.155219 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:27.155229 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:27.155240 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:27.220122 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:27.220142 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:29.756602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:29.767503 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:29.767576 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:29.796758 1446402 cri.go:96] found id: ""
	I1222 00:31:29.796773 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.796781 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:29.796786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:29.796848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:29.826111 1446402 cri.go:96] found id: ""
	I1222 00:31:29.826125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.826133 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:29.826138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:29.826199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:29.851803 1446402 cri.go:96] found id: ""
	I1222 00:31:29.851817 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.851827 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:29.851833 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:29.851893 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:29.877952 1446402 cri.go:96] found id: ""
	I1222 00:31:29.877966 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.877973 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:29.877979 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:29.878041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:29.902393 1446402 cri.go:96] found id: ""
	I1222 00:31:29.902406 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.902414 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:29.902419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:29.902499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:29.930875 1446402 cri.go:96] found id: ""
	I1222 00:31:29.930889 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.930896 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:29.930901 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:29.930961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:29.954467 1446402 cri.go:96] found id: ""
	I1222 00:31:29.954481 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.954488 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:29.954496 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:29.954506 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:30.022300 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:30.022322 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:30.101450 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:30.101468 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:30.160615 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:30.160637 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:30.177543 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:30.177570 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:30.250821 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:32.751739 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:32.762856 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:32.762918 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:32.788176 1446402 cri.go:96] found id: ""
	I1222 00:31:32.788191 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.788197 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:32.788203 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:32.788264 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:32.815561 1446402 cri.go:96] found id: ""
	I1222 00:31:32.815575 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.815582 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:32.815587 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:32.815648 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:32.840208 1446402 cri.go:96] found id: ""
	I1222 00:31:32.840222 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.840229 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:32.840235 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:32.840298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:32.865041 1446402 cri.go:96] found id: ""
	I1222 00:31:32.865055 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.865062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:32.865068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:32.865127 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:32.891852 1446402 cri.go:96] found id: ""
	I1222 00:31:32.891871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.891879 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:32.891884 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:32.891956 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:32.916991 1446402 cri.go:96] found id: ""
	I1222 00:31:32.917005 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.917013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:32.917018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:32.917078 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:32.944551 1446402 cri.go:96] found id: ""
	I1222 00:31:32.944564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.944571 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:32.944579 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:32.944589 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:33.001246 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:33.001270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:33.021275 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:33.021294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:33.093331 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:33.093342 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:33.093353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:33.155921 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:33.155942 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:35.686392 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:35.696748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:35.696809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:35.721721 1446402 cri.go:96] found id: ""
	I1222 00:31:35.721736 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.721743 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:35.721748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:35.721836 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:35.769211 1446402 cri.go:96] found id: ""
	I1222 00:31:35.769225 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.769232 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:35.769237 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:35.769296 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:35.801836 1446402 cri.go:96] found id: ""
	I1222 00:31:35.801850 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.801857 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:35.801863 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:35.801925 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:35.829689 1446402 cri.go:96] found id: ""
	I1222 00:31:35.829703 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.829711 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:35.829716 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:35.829775 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:35.855388 1446402 cri.go:96] found id: ""
	I1222 00:31:35.855403 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.855411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:35.855417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:35.855478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:35.886055 1446402 cri.go:96] found id: ""
	I1222 00:31:35.886070 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.886105 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:35.886112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:35.886177 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:35.911567 1446402 cri.go:96] found id: ""
	I1222 00:31:35.911581 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.911589 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:35.911596 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:35.911608 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:35.978738 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:35.978748 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:35.978761 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:36.043835 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:36.043857 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:36.072278 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:36.072294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:36.133943 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:36.133963 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.650565 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:38.660954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:38.661027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:38.685765 1446402 cri.go:96] found id: ""
	I1222 00:31:38.685780 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.685787 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:38.685793 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:38.685859 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:38.711272 1446402 cri.go:96] found id: ""
	I1222 00:31:38.711287 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.711295 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:38.711300 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:38.711366 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:38.739201 1446402 cri.go:96] found id: ""
	I1222 00:31:38.739217 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.739224 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:38.739230 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:38.739299 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:38.769400 1446402 cri.go:96] found id: ""
	I1222 00:31:38.769414 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.769421 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:38.769426 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:38.769486 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:38.805681 1446402 cri.go:96] found id: ""
	I1222 00:31:38.805695 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.805704 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:38.805709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:38.805770 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:38.831145 1446402 cri.go:96] found id: ""
	I1222 00:31:38.831160 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.831167 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:38.831172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:38.831233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:38.861111 1446402 cri.go:96] found id: ""
	I1222 00:31:38.861125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.861132 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:38.861140 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:38.861150 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:38.917581 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:38.917601 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.934979 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:38.934997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:39.009642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:39.009654 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:39.009666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:39.079837 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:39.079866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:41.610509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:41.620849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:41.620915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:41.645625 1446402 cri.go:96] found id: ""
	I1222 00:31:41.645639 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.645647 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:41.645652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:41.645715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:41.671325 1446402 cri.go:96] found id: ""
	I1222 00:31:41.671339 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.671347 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:41.671353 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:41.671413 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:41.695685 1446402 cri.go:96] found id: ""
	I1222 00:31:41.695699 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.695706 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:41.695712 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:41.695772 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:41.721021 1446402 cri.go:96] found id: ""
	I1222 00:31:41.721034 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.721042 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:41.721047 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:41.721108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:41.757975 1446402 cri.go:96] found id: ""
	I1222 00:31:41.757990 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.757997 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:41.758002 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:41.758064 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:41.802251 1446402 cri.go:96] found id: ""
	I1222 00:31:41.802266 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.802273 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:41.802279 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:41.802339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:41.835417 1446402 cri.go:96] found id: ""
	I1222 00:31:41.835433 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.835439 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:41.835447 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:41.835458 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:41.895808 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:41.895827 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:41.911760 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:41.911776 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:41.978878 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:41.978889 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:41.978900 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:42.043394 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:42.043415 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:44.576818 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:44.587175 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:44.587239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:44.613386 1446402 cri.go:96] found id: ""
	I1222 00:31:44.613404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.613411 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:44.613416 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:44.613479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:44.642424 1446402 cri.go:96] found id: ""
	I1222 00:31:44.642444 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.642451 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:44.642456 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:44.642517 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:44.671623 1446402 cri.go:96] found id: ""
	I1222 00:31:44.671637 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.671645 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:44.671650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:44.671720 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:44.697114 1446402 cri.go:96] found id: ""
	I1222 00:31:44.697128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.697135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:44.697140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:44.697199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:44.724199 1446402 cri.go:96] found id: ""
	I1222 00:31:44.724213 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.724220 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:44.724226 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:44.724298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:44.765403 1446402 cri.go:96] found id: ""
	I1222 00:31:44.765417 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.765436 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:44.765443 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:44.765510 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:44.795984 1446402 cri.go:96] found id: ""
	I1222 00:31:44.795999 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.796017 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:44.796026 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:44.796037 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:44.855400 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:44.855420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:44.872483 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:44.872501 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:44.941437 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:44.941449 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:44.941460 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:45.004528 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:45.004550 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.556363 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:47.566634 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:47.566695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:47.593291 1446402 cri.go:96] found id: ""
	I1222 00:31:47.593305 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.593312 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:47.593318 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:47.593387 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:47.617921 1446402 cri.go:96] found id: ""
	I1222 00:31:47.617935 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.617942 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:47.617947 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:47.618007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:47.644745 1446402 cri.go:96] found id: ""
	I1222 00:31:47.644759 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.644766 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:47.644772 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:47.644831 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:47.669635 1446402 cri.go:96] found id: ""
	I1222 00:31:47.669649 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.669656 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:47.669661 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:47.669721 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:47.696237 1446402 cri.go:96] found id: ""
	I1222 00:31:47.696251 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.696258 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:47.696263 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:47.696321 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:47.720858 1446402 cri.go:96] found id: ""
	I1222 00:31:47.720877 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.720884 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:47.720890 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:47.720950 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:47.759042 1446402 cri.go:96] found id: ""
	I1222 00:31:47.759056 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.759064 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:47.759071 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:47.759088 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:47.775637 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:47.775652 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:47.848304 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:47.848314 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:47.848326 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:47.910821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:47.910839 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.939115 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:47.939131 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.495637 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:50.506061 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:50.506147 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:50.531619 1446402 cri.go:96] found id: ""
	I1222 00:31:50.531634 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.531641 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:50.531647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:50.531707 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:50.556202 1446402 cri.go:96] found id: ""
	I1222 00:31:50.556215 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.556222 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:50.556228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:50.556289 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:50.580637 1446402 cri.go:96] found id: ""
	I1222 00:31:50.580651 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.580658 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:50.580663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:50.580726 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:50.605112 1446402 cri.go:96] found id: ""
	I1222 00:31:50.605126 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.605133 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:50.605138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:50.605198 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:50.629268 1446402 cri.go:96] found id: ""
	I1222 00:31:50.629283 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.629290 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:50.629295 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:50.629356 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:50.655550 1446402 cri.go:96] found id: ""
	I1222 00:31:50.655564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.655571 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:50.655576 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:50.655635 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:50.683838 1446402 cri.go:96] found id: ""
	I1222 00:31:50.683852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.683859 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:50.683866 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:50.683877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.739538 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:50.739556 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:50.759933 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:50.759948 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:50.837166 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:50.837177 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:50.837188 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:50.902694 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:50.902713 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:53.430394 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:53.441567 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:53.441627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:53.468013 1446402 cri.go:96] found id: ""
	I1222 00:31:53.468027 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.468034 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:53.468039 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:53.468109 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:53.494162 1446402 cri.go:96] found id: ""
	I1222 00:31:53.494176 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.494183 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:53.494188 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:53.494248 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:53.524039 1446402 cri.go:96] found id: ""
	I1222 00:31:53.524061 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.524068 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:53.524074 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:53.524137 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:53.548965 1446402 cri.go:96] found id: ""
	I1222 00:31:53.548979 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.548987 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:53.548992 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:53.549054 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:53.580216 1446402 cri.go:96] found id: ""
	I1222 00:31:53.580231 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.580238 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:53.580244 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:53.580304 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:53.605286 1446402 cri.go:96] found id: ""
	I1222 00:31:53.605301 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.605308 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:53.605314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:53.605391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:53.630900 1446402 cri.go:96] found id: ""
	I1222 00:31:53.630915 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.630922 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:53.630930 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:53.630940 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:53.686921 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:53.686939 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:53.704267 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:53.704290 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:53.789032 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:53.778364   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.780153   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.781189   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.783313   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.784665   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:53.778364   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.780153   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.781189   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.783313   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.784665   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:53.789043 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:53.789054 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:53.855439 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:53.855459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:56.386602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:56.396636 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:56.396695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:56.419622 1446402 cri.go:96] found id: ""
	I1222 00:31:56.419635 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.419642 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:56.419647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:56.419711 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:56.443068 1446402 cri.go:96] found id: ""
	I1222 00:31:56.443082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.443088 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:56.443094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:56.443151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:56.468547 1446402 cri.go:96] found id: ""
	I1222 00:31:56.468561 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.468568 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:56.468573 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:56.468639 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:56.496420 1446402 cri.go:96] found id: ""
	I1222 00:31:56.496434 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.496448 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:56.496453 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:56.496515 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:56.521822 1446402 cri.go:96] found id: ""
	I1222 00:31:56.521837 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.521844 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:56.521849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:56.521910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:56.548113 1446402 cri.go:96] found id: ""
	I1222 00:31:56.548127 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.548135 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:56.548142 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:56.548205 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:56.577150 1446402 cri.go:96] found id: ""
	I1222 00:31:56.577166 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.577173 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:56.577181 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:56.577191 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:56.635797 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:56.635817 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:56.651214 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:56.651230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:56.716938 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.708823   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710450   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710968   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.712514   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.708823   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710450   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710968   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.712514   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:56.716948 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:56.716959 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:56.780730 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:56.780749 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.308156 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:59.318415 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:59.318476 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:59.343305 1446402 cri.go:96] found id: ""
	I1222 00:31:59.343319 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.343326 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:59.343332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:59.343390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:59.368501 1446402 cri.go:96] found id: ""
	I1222 00:31:59.368515 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.368523 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:59.368529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:59.368595 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:59.394364 1446402 cri.go:96] found id: ""
	I1222 00:31:59.394378 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.394385 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:59.394391 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:59.394452 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:59.420068 1446402 cri.go:96] found id: ""
	I1222 00:31:59.420082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.420089 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:59.420094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:59.420160 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:59.444153 1446402 cri.go:96] found id: ""
	I1222 00:31:59.444167 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.444174 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:59.444179 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:59.444239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:59.473812 1446402 cri.go:96] found id: ""
	I1222 00:31:59.473827 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.473834 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:59.473840 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:59.473901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:59.502392 1446402 cri.go:96] found id: ""
	I1222 00:31:59.502405 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.502412 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:59.502420 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:59.502429 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:59.564094 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:59.564114 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.596168 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:59.596186 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:59.652216 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:59.652236 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:59.668263 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:59.668278 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:59.729801 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:59.722174   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.722594   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724162   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724501   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.726011   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:59.722174   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.722594   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724162   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724501   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.726011   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:02.230111 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:02.241018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:02.241081 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:02.266489 1446402 cri.go:96] found id: ""
	I1222 00:32:02.266506 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.266514 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:02.266522 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:02.266583 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:02.291427 1446402 cri.go:96] found id: ""
	I1222 00:32:02.291451 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.291459 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:02.291465 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:02.291532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:02.317575 1446402 cri.go:96] found id: ""
	I1222 00:32:02.317599 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.317607 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:02.317612 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:02.317683 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:02.346894 1446402 cri.go:96] found id: ""
	I1222 00:32:02.346918 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.346926 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:02.346932 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:02.347004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:02.373650 1446402 cri.go:96] found id: ""
	I1222 00:32:02.373676 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.373683 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:02.373689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:02.373758 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:02.398320 1446402 cri.go:96] found id: ""
	I1222 00:32:02.398334 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.398341 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:02.398347 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:02.398416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:02.430114 1446402 cri.go:96] found id: ""
	I1222 00:32:02.430128 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.430136 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:02.430144 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:02.430154 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:02.485528 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:02.485549 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:02.501732 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:02.501748 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:02.566784 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:02.558532   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.559242   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.560819   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.561301   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.562756   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:02.558532   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.559242   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.560819   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.561301   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.562756   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:02.566793 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:02.566804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:02.631159 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:02.631178 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:05.163426 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:05.173887 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:05.173961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:05.199160 1446402 cri.go:96] found id: ""
	I1222 00:32:05.199174 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.199181 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:05.199187 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:05.199257 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:05.223620 1446402 cri.go:96] found id: ""
	I1222 00:32:05.223634 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.223641 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:05.223647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:05.223706 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:05.248870 1446402 cri.go:96] found id: ""
	I1222 00:32:05.248885 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.248893 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:05.248898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:05.248961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:05.274824 1446402 cri.go:96] found id: ""
	I1222 00:32:05.274839 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.274846 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:05.274851 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:05.274910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:05.300225 1446402 cri.go:96] found id: ""
	I1222 00:32:05.300239 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.300251 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:05.300257 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:05.300317 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:05.324470 1446402 cri.go:96] found id: ""
	I1222 00:32:05.324484 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.324492 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:05.324500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:05.324563 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:05.352629 1446402 cri.go:96] found id: ""
	I1222 00:32:05.352647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.352655 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:05.352666 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:05.352677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:05.415991 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:05.416014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:05.431828 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:05.431845 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:05.498339 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:05.489677   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.490403   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492216   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492835   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.494413   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:05.489677   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.490403   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492216   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492835   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.494413   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:05.498349 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:05.498364 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:05.563506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:05.563525 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.094246 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:08.105089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:08.105172 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:08.132175 1446402 cri.go:96] found id: ""
	I1222 00:32:08.132203 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.132211 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:08.132217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:08.132280 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:08.158101 1446402 cri.go:96] found id: ""
	I1222 00:32:08.158115 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.158122 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:08.158128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:08.158204 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:08.187238 1446402 cri.go:96] found id: ""
	I1222 00:32:08.187252 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.187259 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:08.187265 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:08.187325 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:08.211742 1446402 cri.go:96] found id: ""
	I1222 00:32:08.211756 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.211763 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:08.211768 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:08.211830 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:08.236099 1446402 cri.go:96] found id: ""
	I1222 00:32:08.236113 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.236120 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:08.236126 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:08.236199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:08.261393 1446402 cri.go:96] found id: ""
	I1222 00:32:08.261407 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.261424 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:08.261430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:08.261498 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:08.288417 1446402 cri.go:96] found id: ""
	I1222 00:32:08.288439 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.288447 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:08.288456 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:08.288467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:08.304103 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:08.304124 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:08.368642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:08.368652 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:08.368663 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:08.430523 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:08.430543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.458205 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:08.458222 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.020855 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:11.033129 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:11.033201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:11.063371 1446402 cri.go:96] found id: ""
	I1222 00:32:11.063385 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.063392 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:11.063398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:11.063479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:11.089853 1446402 cri.go:96] found id: ""
	I1222 00:32:11.089880 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.089891 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:11.089898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:11.089971 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:11.120928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.120943 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.120971 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:11.120978 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:11.121045 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:11.151464 1446402 cri.go:96] found id: ""
	I1222 00:32:11.151502 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.151510 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:11.151516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:11.151589 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:11.179209 1446402 cri.go:96] found id: ""
	I1222 00:32:11.179224 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.179233 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:11.179238 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:11.179324 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:11.205945 1446402 cri.go:96] found id: ""
	I1222 00:32:11.205979 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.205987 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:11.205993 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:11.206065 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:11.231928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.231942 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.231949 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:11.231957 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:11.231967 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.296038 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:11.296064 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:11.312748 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:11.312764 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:11.378465 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:11.378480 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:11.378499 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:11.444244 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:11.444264 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:13.977331 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:13.989011 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:13.989094 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:14.028691 1446402 cri.go:96] found id: ""
	I1222 00:32:14.028726 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.028734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:14.028739 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:14.028810 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:14.055710 1446402 cri.go:96] found id: ""
	I1222 00:32:14.055725 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.055732 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:14.055738 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:14.055809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:14.082530 1446402 cri.go:96] found id: ""
	I1222 00:32:14.082546 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.082553 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:14.082559 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:14.082625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:14.107817 1446402 cri.go:96] found id: ""
	I1222 00:32:14.107840 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.107847 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:14.107853 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:14.107913 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:14.136680 1446402 cri.go:96] found id: ""
	I1222 00:32:14.136695 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.136701 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:14.136707 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:14.136767 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:14.161938 1446402 cri.go:96] found id: ""
	I1222 00:32:14.161961 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.161968 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:14.161974 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:14.162041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:14.186794 1446402 cri.go:96] found id: ""
	I1222 00:32:14.186808 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.186814 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:14.186823 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:14.186832 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:14.242688 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:14.242708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:14.259715 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:14.259732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:14.326979 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:14.326990 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:14.327002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:14.395678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:14.395705 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:16.929785 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:16.940545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:16.940609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:16.965350 1446402 cri.go:96] found id: ""
	I1222 00:32:16.965365 1446402 logs.go:282] 0 containers: []
	W1222 00:32:16.965372 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:16.965378 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:16.965441 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:17.001431 1446402 cri.go:96] found id: ""
	I1222 00:32:17.001447 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.001455 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:17.001461 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:17.001530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:17.045444 1446402 cri.go:96] found id: ""
	I1222 00:32:17.045459 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.045466 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:17.045472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:17.045531 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:17.080407 1446402 cri.go:96] found id: ""
	I1222 00:32:17.080422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.080429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:17.080435 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:17.080500 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:17.107785 1446402 cri.go:96] found id: ""
	I1222 00:32:17.107799 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.107806 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:17.107812 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:17.107874 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:17.133084 1446402 cri.go:96] found id: ""
	I1222 00:32:17.133099 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.133106 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:17.133112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:17.133170 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:17.162200 1446402 cri.go:96] found id: ""
	I1222 00:32:17.162215 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.162222 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:17.162232 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:17.162243 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:17.220080 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:17.220098 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:17.235955 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:17.235971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:17.302399 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:17.302410 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:17.302420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:17.365559 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:17.365578 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:19.896945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:19.907830 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:19.907900 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:19.933463 1446402 cri.go:96] found id: ""
	I1222 00:32:19.933478 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.933485 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:19.933490 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:19.933556 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:19.958969 1446402 cri.go:96] found id: ""
	I1222 00:32:19.958983 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.958990 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:19.958996 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:19.959057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:19.984725 1446402 cri.go:96] found id: ""
	I1222 00:32:19.984740 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.984748 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:19.984753 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:19.984819 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:20.030303 1446402 cri.go:96] found id: ""
	I1222 00:32:20.030318 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.030326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:20.030332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:20.030400 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:20.067239 1446402 cri.go:96] found id: ""
	I1222 00:32:20.067254 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.067262 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:20.067268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:20.067336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:20.094147 1446402 cri.go:96] found id: ""
	I1222 00:32:20.094161 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.094169 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:20.094174 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:20.094236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:20.120347 1446402 cri.go:96] found id: ""
	I1222 00:32:20.120361 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.120369 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:20.120377 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:20.120387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:20.192596 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:20.192608 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:20.192620 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:20.255011 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:20.255031 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:20.288327 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:20.288344 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:20.347178 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:20.347196 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:22.863692 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:22.873845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:22.873915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:22.898717 1446402 cri.go:96] found id: ""
	I1222 00:32:22.898737 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.898744 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:22.898749 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:22.898808 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:22.923719 1446402 cri.go:96] found id: ""
	I1222 00:32:22.923734 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.923741 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:22.923746 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:22.923806 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:22.953819 1446402 cri.go:96] found id: ""
	I1222 00:32:22.953834 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.953841 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:22.953847 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:22.953908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:22.977769 1446402 cri.go:96] found id: ""
	I1222 00:32:22.977783 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.977791 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:22.977796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:22.977858 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:23.011333 1446402 cri.go:96] found id: ""
	I1222 00:32:23.011348 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.011355 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:23.011361 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:23.011426 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:23.040887 1446402 cri.go:96] found id: ""
	I1222 00:32:23.040900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.040907 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:23.040913 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:23.040973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:23.070583 1446402 cri.go:96] found id: ""
	I1222 00:32:23.070597 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.070604 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:23.070612 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:23.070622 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:23.087115 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:23.087132 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:23.152903 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:23.152913 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:23.152924 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:23.215824 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:23.215846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:23.249147 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:23.249175 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:25.810217 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:25.820952 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:25.821015 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:25.847989 1446402 cri.go:96] found id: ""
	I1222 00:32:25.848004 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.848011 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:25.848016 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:25.848091 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:25.877243 1446402 cri.go:96] found id: ""
	I1222 00:32:25.877258 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.877265 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:25.877271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:25.877332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:25.902255 1446402 cri.go:96] found id: ""
	I1222 00:32:25.902271 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.902278 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:25.902283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:25.902344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:25.927468 1446402 cri.go:96] found id: ""
	I1222 00:32:25.927482 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.927489 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:25.927495 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:25.927559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:25.957558 1446402 cri.go:96] found id: ""
	I1222 00:32:25.957571 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.957578 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:25.957583 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:25.957644 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:25.982483 1446402 cri.go:96] found id: ""
	I1222 00:32:25.982509 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.982517 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:25.982523 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:25.982599 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:26.024676 1446402 cri.go:96] found id: ""
	I1222 00:32:26.024691 1446402 logs.go:282] 0 containers: []
	W1222 00:32:26.024698 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:26.024706 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:26.024724 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:26.087946 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:26.087968 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:26.105041 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:26.105066 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:26.171303 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:26.162481   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.163042   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.164767   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.165348   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.166856   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:26.162481   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.163042   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.164767   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.165348   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.166856   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:26.171313 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:26.171324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:26.239046 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:26.239065 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.769012 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:28.779505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:28.779566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:28.804277 1446402 cri.go:96] found id: ""
	I1222 00:32:28.804291 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.804298 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:28.804303 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:28.804364 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:28.831914 1446402 cri.go:96] found id: ""
	I1222 00:32:28.831927 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.831935 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:28.831940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:28.831999 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:28.858930 1446402 cri.go:96] found id: ""
	I1222 00:32:28.858951 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.858959 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:28.858964 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:28.859026 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:28.884503 1446402 cri.go:96] found id: ""
	I1222 00:32:28.884517 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.884524 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:28.884529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:28.884588 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:28.908385 1446402 cri.go:96] found id: ""
	I1222 00:32:28.908399 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.908406 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:28.908412 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:28.908471 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:28.932216 1446402 cri.go:96] found id: ""
	I1222 00:32:28.932231 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.932238 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:28.932243 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:28.932318 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:28.960692 1446402 cri.go:96] found id: ""
	I1222 00:32:28.960706 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.960714 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:28.960721 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:28.960732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.991268 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:28.991284 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:29.051794 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:29.051812 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:29.076793 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:29.076809 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:29.140856 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:29.132991   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.133597   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135220   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135674   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.137107   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:29.132991   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.133597   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135220   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135674   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.137107   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:29.140866 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:29.140877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:31.704016 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:31.714529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:31.714593 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:31.739665 1446402 cri.go:96] found id: ""
	I1222 00:32:31.739679 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.739687 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:31.739693 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:31.739753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:31.764377 1446402 cri.go:96] found id: ""
	I1222 00:32:31.764391 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.764399 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:31.764404 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:31.764465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:31.793617 1446402 cri.go:96] found id: ""
	I1222 00:32:31.793631 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.793638 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:31.793644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:31.793709 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:31.818025 1446402 cri.go:96] found id: ""
	I1222 00:32:31.818040 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.818047 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:31.818055 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:31.818145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:31.848262 1446402 cri.go:96] found id: ""
	I1222 00:32:31.848277 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.848285 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:31.848293 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:31.848357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:31.873649 1446402 cri.go:96] found id: ""
	I1222 00:32:31.873663 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.873670 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:31.873676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:31.873739 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:31.898375 1446402 cri.go:96] found id: ""
	I1222 00:32:31.898390 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.898397 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:31.898404 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:31.898416 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:31.955541 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:31.955560 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:31.971557 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:31.971574 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:32.067449 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:32.057015   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.057465   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.059124   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.060199   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.063114   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:32.057015   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.057465   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.059124   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.060199   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.063114   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:32.067459 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:32.067469 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:32.129846 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:32.129865 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:34.659453 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:34.669625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:34.669685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:34.696885 1446402 cri.go:96] found id: ""
	I1222 00:32:34.696900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.696907 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:34.696912 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:34.696972 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:34.721026 1446402 cri.go:96] found id: ""
	I1222 00:32:34.721050 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.721058 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:34.721063 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:34.721133 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:34.745654 1446402 cri.go:96] found id: ""
	I1222 00:32:34.745669 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.745687 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:34.745692 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:34.745753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:34.771407 1446402 cri.go:96] found id: ""
	I1222 00:32:34.771422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.771429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:34.771434 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:34.771502 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:34.795734 1446402 cri.go:96] found id: ""
	I1222 00:32:34.795749 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.795756 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:34.795761 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:34.795821 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:34.824632 1446402 cri.go:96] found id: ""
	I1222 00:32:34.824647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.824664 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:34.824670 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:34.824737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:34.850691 1446402 cri.go:96] found id: ""
	I1222 00:32:34.850705 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.850713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:34.850721 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:34.850732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:34.923721 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:34.914435   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.915282   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.916952   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.917512   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.919258   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:34.914435   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.915282   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.916952   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.917512   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.919258   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:34.923732 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:34.923743 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:34.988429 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:34.988447 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:35.032884 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:35.032901 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:35.094822 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:35.094842 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:37.611964 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:37.625103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:37.625168 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:37.652713 1446402 cri.go:96] found id: ""
	I1222 00:32:37.652727 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.652734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:37.652740 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:37.652805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:37.677907 1446402 cri.go:96] found id: ""
	I1222 00:32:37.677921 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.677928 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:37.677934 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:37.677996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:37.706882 1446402 cri.go:96] found id: ""
	I1222 00:32:37.706901 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.706909 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:37.706914 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:37.706973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:37.734381 1446402 cri.go:96] found id: ""
	I1222 00:32:37.734396 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.734403 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:37.734408 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:37.734468 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:37.763444 1446402 cri.go:96] found id: ""
	I1222 00:32:37.763464 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.763483 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:37.763489 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:37.763559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:37.789695 1446402 cri.go:96] found id: ""
	I1222 00:32:37.789718 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.789726 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:37.789732 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:37.789805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:37.818949 1446402 cri.go:96] found id: ""
	I1222 00:32:37.818963 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.818970 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:37.818977 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:37.818989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:37.886829 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:37.877811   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.878764   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880313   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880782   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.882319   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:37.877811   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.878764   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880313   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880782   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.882319   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:37.886840 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:37.886850 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:37.953234 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:37.953253 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:37.982264 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:37.982280 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:38.049773 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:38.049792 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.567633 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:40.577940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:40.578000 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:40.602023 1446402 cri.go:96] found id: ""
	I1222 00:32:40.602038 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.602045 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:40.602051 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:40.602145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:40.630778 1446402 cri.go:96] found id: ""
	I1222 00:32:40.630802 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.630810 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:40.630816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:40.630877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:40.658578 1446402 cri.go:96] found id: ""
	I1222 00:32:40.658592 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.658599 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:40.658605 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:40.658669 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:40.686369 1446402 cri.go:96] found id: ""
	I1222 00:32:40.686384 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.686393 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:40.686399 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:40.686466 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:40.712486 1446402 cri.go:96] found id: ""
	I1222 00:32:40.712501 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.712509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:40.712514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:40.712580 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:40.744516 1446402 cri.go:96] found id: ""
	I1222 00:32:40.744531 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.744538 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:40.744544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:40.744609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:40.770724 1446402 cri.go:96] found id: ""
	I1222 00:32:40.770738 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.770745 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:40.770754 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:40.770766 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.787581 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:40.787598 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:40.853257 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:40.853267 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:40.853279 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:40.918705 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:40.918728 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:40.947006 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:40.947022 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:43.505746 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:43.515847 1446402 kubeadm.go:602] duration metric: took 4m1.800425441s to restartPrimaryControlPlane
	W1222 00:32:43.515910 1446402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:32:43.515983 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:32:43.923830 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:32:43.937721 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:32:43.945799 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:32:43.945856 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:32:43.953730 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:32:43.953738 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:32:43.953790 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:32:43.962117 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:32:43.962172 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:32:43.969797 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:32:43.977738 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:32:43.977798 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:32:43.986214 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:32:43.994326 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:32:43.994386 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:32:44.004154 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:32:44.013730 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:32:44.013800 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:32:44.022121 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:32:44.061736 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:32:44.061785 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:32:44.140713 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:32:44.140778 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:32:44.140818 1446402 kubeadm.go:319] OS: Linux
	I1222 00:32:44.140862 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:32:44.140909 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:32:44.140955 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:32:44.141002 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:32:44.141048 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:32:44.141095 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:32:44.141140 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:32:44.141187 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:32:44.141232 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:32:44.208774 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:32:44.208878 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:32:44.208966 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:32:44.214899 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:32:44.218610 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:32:44.218748 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:32:44.218821 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:32:44.218895 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:32:44.218955 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:32:44.219024 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:32:44.219076 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:32:44.219138 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:32:44.219198 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:32:44.219270 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:32:44.219343 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:32:44.219380 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:32:44.219458 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:32:44.443111 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:32:44.602435 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:32:44.699769 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:32:44.991502 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:32:45.160573 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:32:45.170594 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:32:45.170674 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:32:45.173883 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:32:45.174024 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:32:45.174124 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:32:45.175745 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:32:45.208642 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:32:45.208749 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:32:45.228521 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:32:45.228620 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:32:45.228659 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:32:45.414555 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:32:45.414668 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:36:45.414312 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00033138s
	I1222 00:36:45.414339 1446402 kubeadm.go:319] 
	I1222 00:36:45.414437 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:36:45.414497 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:36:45.414614 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:36:45.414622 1446402 kubeadm.go:319] 
	I1222 00:36:45.414721 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:36:45.414751 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:36:45.414780 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:36:45.414783 1446402 kubeadm.go:319] 
	I1222 00:36:45.419351 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:36:45.419863 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:36:45.420008 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:36:45.420300 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:36:45.420306 1446402 kubeadm.go:319] 
	I1222 00:36:45.420408 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:36:45.420558 1446402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00033138s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 00:36:45.420656 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:36:45.827625 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:36:45.841758 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:36:45.841815 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:36:45.850297 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:36:45.850306 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:36:45.850362 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:36:45.858548 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:36:45.858613 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:36:45.866403 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:36:45.875159 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:36:45.875216 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:36:45.883092 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.891274 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:36:45.891330 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.899439 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:36:45.907618 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:36:45.907680 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:36:45.915873 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:36:45.954554 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:36:45.954640 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:36:46.034225 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:36:46.034294 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:36:46.034329 1446402 kubeadm.go:319] OS: Linux
	I1222 00:36:46.034372 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:36:46.034419 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:36:46.034466 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:36:46.034512 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:36:46.034571 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:36:46.034626 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:36:46.034679 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:36:46.034746 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:36:46.034795 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:36:46.102483 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:36:46.102587 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:36:46.102678 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:36:46.110548 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:36:46.114145 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:36:46.114232 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:36:46.114297 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:36:46.114378 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:36:46.114438 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:36:46.114552 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:36:46.114617 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:36:46.114681 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:36:46.114756 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:36:46.114832 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:36:46.114915 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:36:46.114959 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:36:46.115024 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:36:46.590004 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:36:46.981109 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:36:47.331562 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:36:47.513275 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:36:48.017649 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:36:48.018361 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:36:48.020999 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:36:48.024119 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:36:48.024221 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:36:48.024298 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:36:48.024363 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:36:48.046779 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:36:48.047056 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:36:48.054716 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:36:48.055076 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:36:48.055127 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:36:48.190129 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:36:48.190242 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:40:48.190377 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238043s
	I1222 00:40:48.190402 1446402 kubeadm.go:319] 
	I1222 00:40:48.190458 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:40:48.190495 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:40:48.190599 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:40:48.190604 1446402 kubeadm.go:319] 
	I1222 00:40:48.190706 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:40:48.190737 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:40:48.190766 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:40:48.190769 1446402 kubeadm.go:319] 
	I1222 00:40:48.196227 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:40:48.196675 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:40:48.196785 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:40:48.197020 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:40:48.197025 1446402 kubeadm.go:319] 
	I1222 00:40:48.197092 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:40:48.197152 1446402 kubeadm.go:403] duration metric: took 12m6.51958097s to StartCluster
	I1222 00:40:48.197184 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:40:48.197246 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:40:48.222444 1446402 cri.go:96] found id: ""
	I1222 00:40:48.222459 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.222466 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:40:48.222472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:40:48.222536 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:40:48.256342 1446402 cri.go:96] found id: ""
	I1222 00:40:48.256356 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.256363 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:40:48.256368 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:40:48.256430 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:40:48.285108 1446402 cri.go:96] found id: ""
	I1222 00:40:48.285122 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.285129 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:40:48.285135 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:40:48.285196 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:40:48.317753 1446402 cri.go:96] found id: ""
	I1222 00:40:48.317768 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.317775 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:40:48.317780 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:40:48.317842 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:40:48.347674 1446402 cri.go:96] found id: ""
	I1222 00:40:48.347689 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.347696 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:40:48.347701 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:40:48.347765 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:40:48.372255 1446402 cri.go:96] found id: ""
	I1222 00:40:48.372268 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.372275 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:40:48.372281 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:40:48.372339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:40:48.396691 1446402 cri.go:96] found id: ""
	I1222 00:40:48.396705 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.396713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:40:48.396725 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:40:48.396735 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:40:48.455513 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:40:48.455533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:40:48.471680 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:40:48.471697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:40:48.541459 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:40:48.541473 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:40:48.541483 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:40:48.603413 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:40:48.603432 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:40:48.631201 1446402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:40:48.631242 1446402 out.go:285] * 
	W1222 00:40:48.631304 1446402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.631321 1446402 out.go:285] * 
	W1222 00:40:48.633603 1446402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:40:48.639700 1446402 out.go:203] 
	W1222 00:40:48.642575 1446402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.642620 1446402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:40:48.642642 1446402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:40:48.645844 1446402 out.go:203] 
	
	
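	Diagnostic sketch (an editor's addition, not part of the original test run; assumes it is executed on the affected host or inside the node container): the kubeadm failure above, together with the cgroups-v1 SystemVerification warning, can be confirmed by checking which cgroup hierarchy the host mounts. kubelet v1.35+ fails validation on cgroup v1 by default.

```shell
# Determine the mounted cgroup filesystem type.
# "cgroup2fs" => cgroup v2 (unified hierarchy); "tmpfs" => legacy cgroup v1.
fstype=$(stat -fc %T /sys/fs/cgroup/)
case "$fstype" in
  cgroup2fs) echo "cgroup v2 (unified) - supported by kubelet v1.35+" ;;
  tmpfs)     echo "cgroup v1 (legacy) - kubelet v1.35+ fails validation by default" ;;
  *)         echo "unknown cgroup filesystem: $fstype" ;;
esac
```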
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248656814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248726812Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248818752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248887126Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248959487Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249024218Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249082229Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249153910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249223252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249308890Z" level=info msg="Connect containerd service"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249702304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.252215911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272726589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273135610Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272971801Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273361942Z" level=info msg="Start recovering state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.330860881Z" level=info msg="Start event monitor"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331048714Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331117121Z" level=info msg="Start streaming server"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331184855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331242062Z" level=info msg="runtime interface starting up..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331301705Z" level=info msg="starting plugins..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331364582Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331577047Z" level=info msg="containerd successfully booted in 0.110567s"
	Dec 22 00:28:40 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:40:52.056387   21154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:52.056803   21154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:52.061404   21154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:52.062047   21154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:52.063747   21154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:40:52 up 1 day,  7:23,  0 user,  load average: 0.09, 0.17, 0.50
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:40:48 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:49 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 00:40:49 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:49 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:49 functional-973657 kubelet[21003]: E1222 00:40:49.793446   21003 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:49 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:49 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:50 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 00:40:50 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:50 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:50 functional-973657 kubelet[21033]: E1222 00:40:50.563599   21033 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:50 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:50 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:51 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 22 00:40:51 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:51 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:51 functional-973657 kubelet[21069]: E1222 00:40:51.318812   21069 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:51 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:51 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:40:51 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 22 00:40:51 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:51 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:40:52 functional-973657 kubelet[21153]: E1222 00:40:52.053776   21153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:40:52 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:40:52 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
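	The repeated `run.go:72` failures above all trace back to the same check: kubelet v1.35+ refuses to run on a cgroup v1 host unless explicitly opted in. Per the SystemVerification warning earlier in this log, the opt-in is a kubelet configuration option; a minimal sketch follows (field name taken from the warning text; whether minikube can inject it via `--extra-config` on this kernel is an assumption, not something this run verified):

```yaml
# KubeletConfiguration fragment (sketch): explicitly re-enable deprecated
# cgroup v1 support, as described in the SystemVerification warning.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```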

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (369.490342ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-973657 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-973657 apply -f testdata/invalidsvc.yaml: exit status 1 (58.296546ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-973657 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-973657 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-973657 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-973657 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-973657 --alsologtostderr -v=1] stderr:
I1222 00:43:20.635193 1463866 out.go:360] Setting OutFile to fd 1 ...
I1222 00:43:20.635310 1463866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:20.635324 1463866 out.go:374] Setting ErrFile to fd 2...
I1222 00:43:20.635330 1463866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:20.635593 1463866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:43:20.635864 1463866 mustload.go:66] Loading cluster: functional-973657
I1222 00:43:20.636287 1463866 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:20.636746 1463866 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:43:20.653697 1463866 host.go:66] Checking if "functional-973657" exists ...
I1222 00:43:20.654024 1463866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:43:20.711227 1463866 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.70118023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:43:20.711363 1463866 api_server.go:166] Checking apiserver status ...
I1222 00:43:20.711433 1463866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1222 00:43:20.711475 1463866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:43:20.729478 1463866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
W1222 00:43:20.842047 1463866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1222 00:43:20.845469 1463866 out.go:179] * The control-plane node functional-973657 apiserver is not running: (state=Stopped)
I1222 00:43:20.848459 1463866 out.go:179]   To start a cluster, run: "minikube start -p functional-973657"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (332.945573ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-973657 service hello-node --url                                                                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount     │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001:/mount-9p --alsologtostderr -v=1               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh -- ls -la /mount-9p                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh cat /mount-9p/test-1766364190735391703                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh sudo umount -f /mount-9p                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ mount     │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4291673907/001:/mount-9p --alsologtostderr -v=1 --port 44921 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh -- ls -la /mount-9p                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh sudo umount -f /mount-9p                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount     │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount1 --alsologtostderr -v=1                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount     │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount2 --alsologtostderr -v=1                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh findmnt -T /mount1                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount     │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount3 --alsologtostderr -v=1                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh findmnt -T /mount1                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh findmnt -T /mount2                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh findmnt -T /mount3                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ mount     │ -p functional-973657 --kill=true                                                                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ start     │ -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ start     │ -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ start     │ -p functional-973657 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1             │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-973657 --alsologtostderr -v=1                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:43:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:43:20.397088 1463794 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:43:20.397239 1463794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.397280 1463794 out.go:374] Setting ErrFile to fd 2...
	I1222 00:43:20.397293 1463794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.397557 1463794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:43:20.397958 1463794 out.go:368] Setting JSON to false
	I1222 00:43:20.398881 1463794 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":113153,"bootTime":1766251047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:43:20.398948 1463794 start.go:143] virtualization:  
	I1222 00:43:20.402305 1463794 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:43:20.405324 1463794 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:43:20.405407 1463794 notify.go:221] Checking for updates...
	I1222 00:43:20.411808 1463794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:43:20.414617 1463794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:43:20.417523 1463794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:43:20.420394 1463794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:43:20.423238 1463794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:43:20.426518 1463794 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:43:20.427149 1463794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:43:20.460983 1463794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:43:20.461115 1463794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.517726 1463794 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.508508606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.517835 1463794 docker.go:319] overlay module found
	I1222 00:43:20.520954 1463794 out.go:179] * Using the docker driver based on existing profile
	I1222 00:43:20.523767 1463794 start.go:309] selected driver: docker
	I1222 00:43:20.523793 1463794 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.523909 1463794 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:43:20.524020 1463794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.583070 1463794 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.573843603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.583581 1463794 cni.go:84] Creating CNI manager for ""
	I1222 00:43:20.583648 1463794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:43:20.583700 1463794 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.586814 1463794 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248656814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248726812Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248818752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248887126Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248959487Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249024218Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249082229Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249153910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249223252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249308890Z" level=info msg="Connect containerd service"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249702304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.252215911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272726589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273135610Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272971801Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273361942Z" level=info msg="Start recovering state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.330860881Z" level=info msg="Start event monitor"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331048714Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331117121Z" level=info msg="Start streaming server"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331184855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331242062Z" level=info msg="runtime interface starting up..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331301705Z" level=info msg="starting plugins..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331364582Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331577047Z" level=info msg="containerd successfully booted in 0.110567s"
	Dec 22 00:28:40 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:43:21.913698   23413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:21.914144   23413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:21.915848   23413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:21.916355   23413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:21.917984   23413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:43:21 up 1 day,  7:25,  0 user,  load average: 0.66, 0.33, 0.51
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:43:18 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:19 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 520.
	Dec 22 00:43:19 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:19 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:19 functional-973657 kubelet[23274]: E1222 00:43:19.300311   23274 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:19 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:19 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:19 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 521.
	Dec 22 00:43:19 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:19 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:20 functional-973657 kubelet[23295]: E1222 00:43:20.063296   23295 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:20 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:20 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:20 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 522.
	Dec 22 00:43:20 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:20 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:20 functional-973657 kubelet[23302]: E1222 00:43:20.798446   23302 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:20 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:20 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:21 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 523.
	Dec 22 00:43:21 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:21 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:21 functional-973657 kubelet[23331]: E1222 00:43:21.554186   23331 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:21 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:21 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
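The kubelet restart loop above fails validation because the node is running cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"). A minimal sketch of the corresponding manual check (`stat -fc %T /sys/fs/cgroup`): the filesystem type mounted at `/sys/fs/cgroup` is `cgroup2fs` on a unified (v2) host and `tmpfs` on a legacy/hybrid (v1) host. The helper name below is ours, not minikube's.

```python
import subprocess

def cgroup_mode(fs_type: str) -> str:
    """Map the filesystem type mounted at /sys/fs/cgroup to a cgroup mode."""
    if fs_type == "cgroup2fs":
        return "v2"       # unified hierarchy (what newer kubelets require)
    if fs_type == "tmpfs":
        return "v1"       # legacy/hybrid: per-controller cgroupfs mounts under a tmpfs
    return "unknown"

if __name__ == "__main__":
    # Same probe as `stat -fc %T /sys/fs/cgroup`; prints "unknown" if the
    # path does not exist (e.g. non-Linux hosts).
    out = subprocess.run(["stat", "-fc", "%T", "/sys/fs/cgroup"],
                         capture_output=True, text=True).stdout.strip()
    print(cgroup_mode(out))
```

On this CI host the loop above implies a v1 result, which matches the restart counter climbing past 520.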
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (338.366655ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.76s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 status: exit status 2 (336.832273ms)
-- stdout --
	functional-973657
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-973657 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (315.490668ms)
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-973657 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 status -o json: exit status 2 (311.669446ms)
-- stdout --
	{"Name":"functional-973657","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-973657 status -o json" : exit status 2
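The `status -o json` output above is a single JSON object per node. A sketch of how a post-mortem script might decide health from it, using the exact line and field names emitted above (the `cluster_healthy` helper is ours, not part of minikube):

```python
import json

# The exact JSON line emitted by `minikube status -o json` above.
status_line = ('{"Name":"functional-973657","Host":"Running","Kubelet":"Stopped",'
               '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
status = json.loads(status_line)

def cluster_healthy(s: dict) -> bool:
    """Healthy only if host, kubelet and apiserver all report Running."""
    return all(s.get(k) == "Running" for k in ("Host", "Kubelet", "APIServer"))

# For this profile the container is up but kubelet/apiserver are down,
# which is why every status invocation exits with code 2.
print(cluster_healthy(status))
```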
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:
-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
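In the `docker inspect` output above, minikube reaches the apiserver through the published binding for `8441/tcp` (host port 38393 on 127.0.0.1). A sketch of extracting that mapping from the inspect JSON, trimmed to the relevant section; the `host_port` helper is illustrative, not minikube code:

```python
import json

# Trimmed from the `docker inspect functional-973657` output above:
# just the published-port section under NetworkSettings.
inspect = json.loads("""
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "38390"}],
    "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38393"}]
}}}]
""")

def host_port(inspect_doc: list, container_port: str) -> str:
    """Return the first published host port for a port key like '8441/tcp'."""
    bindings = inspect_doc[0]["NetworkSettings"]["Ports"].get(container_port, [])
    return bindings[0]["HostPort"] if bindings else ""

print(host_port(inspect, "8441/tcp"))
```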
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (348.921924ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-973657 service list                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ service │ functional-973657 service list -o json                                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ service │ functional-973657 service --namespace=default --https --url hello-node                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ service │ functional-973657 service hello-node --url --format={{.IP}}                                                                                         │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ service │ functional-973657 service hello-node --url                                                                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh     │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount   │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001:/mount-9p --alsologtostderr -v=1               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh     │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh     │ functional-973657 ssh -- ls -la /mount-9p                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh     │ functional-973657 ssh cat /mount-9p/test-1766364190735391703                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh     │ functional-973657 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh     │ functional-973657 ssh sudo umount -f /mount-9p                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ mount   │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4291673907/001:/mount-9p --alsologtostderr -v=1 --port 44921 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh     │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh     │ functional-973657 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh     │ functional-973657 ssh -- ls -la /mount-9p                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh     │ functional-973657 ssh sudo umount -f /mount-9p                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount   │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount1 --alsologtostderr -v=1                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount   │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount2 --alsologtostderr -v=1                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh     │ functional-973657 ssh findmnt -T /mount1                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount   │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount3 --alsologtostderr -v=1                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh     │ functional-973657 ssh findmnt -T /mount1                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh     │ functional-973657 ssh findmnt -T /mount2                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh     │ functional-973657 ssh findmnt -T /mount3                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ mount   │ -p functional-973657 --kill=true                                                                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:28:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:28:37.451822 1446402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:28:37.451933 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.451942 1446402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:28:37.451946 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.452197 1446402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:28:37.453530 1446402 out.go:368] Setting JSON to false
	I1222 00:28:37.454369 1446402 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":112270,"bootTime":1766251047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:28:37.454418 1446402 start.go:143] virtualization:  
	I1222 00:28:37.457786 1446402 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:28:37.461618 1446402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:28:37.461721 1446402 notify.go:221] Checking for updates...
	I1222 00:28:37.467381 1446402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:28:37.470438 1446402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:28:37.473311 1446402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:28:37.476105 1446402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:28:37.479015 1446402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:28:37.482344 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:37.482442 1446402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:28:37.509513 1446402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:28:37.509620 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.577428 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.567598413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.577529 1446402 docker.go:319] overlay module found
	I1222 00:28:37.580701 1446402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:28:37.583433 1446402 start.go:309] selected driver: docker
	I1222 00:28:37.583443 1446402 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.583549 1446402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:28:37.583656 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.637869 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.628834862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.638333 1446402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:28:37.638357 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:37.638411 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:37.638452 1446402 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.641536 1446402 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:28:37.644340 1446402 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:28:37.647258 1446402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:28:37.650255 1446402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:28:37.650391 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:37.650410 1446402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:28:37.650417 1446402 cache.go:65] Caching tarball of preloaded images
	I1222 00:28:37.650491 1446402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:28:37.650499 1446402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:28:37.650609 1446402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:28:37.670527 1446402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:28:37.670540 1446402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:28:37.670559 1446402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:28:37.670589 1446402 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:28:37.670659 1446402 start.go:364] duration metric: took 50.988µs to acquireMachinesLock for "functional-973657"
	I1222 00:28:37.670679 1446402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:28:37.670683 1446402 fix.go:54] fixHost starting: 
	I1222 00:28:37.670937 1446402 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:28:37.688276 1446402 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:28:37.688299 1446402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:28:37.691627 1446402 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:28:37.691654 1446402 machine.go:94] provisionDockerMachine start ...
	I1222 00:28:37.691736 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.709165 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.709504 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.709511 1446402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:28:37.842221 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:37.842236 1446402 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:28:37.842299 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.861944 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.862401 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.862411 1446402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:28:38.004653 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:38.004757 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.029552 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:38.029903 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:38.029921 1446402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:28:38.166540 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:28:38.166558 1446402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:28:38.166588 1446402 ubuntu.go:190] setting up certificates
	I1222 00:28:38.166605 1446402 provision.go:84] configureAuth start
	I1222 00:28:38.166666 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:38.184810 1446402 provision.go:143] copyHostCerts
	I1222 00:28:38.184868 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:28:38.184883 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:28:38.184958 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:28:38.185063 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:28:38.185068 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:28:38.185094 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:28:38.185151 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:28:38.185154 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:28:38.185176 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:28:38.185228 1446402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
	I1222 00:28:38.572282 1446402 provision.go:177] copyRemoteCerts
	I1222 00:28:38.572338 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:28:38.572378 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.590440 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.686182 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:28:38.704460 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:28:38.721777 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:28:38.739280 1446402 provision.go:87] duration metric: took 572.652959ms to configureAuth
	I1222 00:28:38.739299 1446402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:28:38.739484 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:38.739490 1446402 machine.go:97] duration metric: took 1.047830613s to provisionDockerMachine
	I1222 00:28:38.739496 1446402 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:28:38.739506 1446402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:28:38.739568 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:28:38.739605 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.761201 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.864350 1446402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:28:38.868359 1446402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:28:38.868379 1446402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:28:38.868390 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:28:38.868447 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:28:38.868524 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:28:38.868598 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:28:38.868641 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:28:38.878975 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:38.897171 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:28:38.915159 1446402 start.go:296] duration metric: took 175.648245ms for postStartSetup
	I1222 00:28:38.915247 1446402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:28:38.915286 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.933740 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.031561 1446402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:28:39.036720 1446402 fix.go:56] duration metric: took 1.366028879s for fixHost
	I1222 00:28:39.036736 1446402 start.go:83] releasing machines lock for "functional-973657", held for 1.366069585s
	I1222 00:28:39.036807 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:39.056063 1446402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:28:39.056131 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.056209 1446402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:28:39.056284 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.084466 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.086214 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.182487 1446402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:28:39.277379 1446402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:28:39.281860 1446402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:28:39.281935 1446402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:28:39.290006 1446402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:28:39.290021 1446402 start.go:496] detecting cgroup driver to use...
	I1222 00:28:39.290053 1446402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:28:39.290134 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:28:39.305829 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:28:39.319320 1446402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:28:39.319374 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:28:39.335346 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:28:39.349145 1446402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:28:39.473478 1446402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:28:39.618008 1446402 docker.go:234] disabling docker service ...
	I1222 00:28:39.618090 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:28:39.634656 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:28:39.647677 1446402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:28:39.771400 1446402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:28:39.894302 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:28:39.907014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 00:28:39.920771 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:28:39.929451 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:28:39.938829 1446402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:28:39.938905 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:28:39.947569 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.956482 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:28:39.965074 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.973881 1446402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:28:39.981977 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:28:39.990962 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:28:39.999843 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 00:28:40.013571 1446402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:28:40.024830 1446402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:28:40.034498 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.154100 1446402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:28:40.334682 1446402 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:28:40.334744 1446402 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:28:40.338667 1446402 start.go:564] Will wait 60s for crictl version
	I1222 00:28:40.338723 1446402 ssh_runner.go:195] Run: which crictl
	I1222 00:28:40.342335 1446402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:28:40.367245 1446402 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:28:40.367308 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.389012 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.418027 1446402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:28:40.420898 1446402 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:28:40.437638 1446402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:28:40.444854 1446402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:28:40.447771 1446402 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:28:40.447915 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:40.447997 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.473338 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.473351 1446402 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:28:40.473409 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.498366 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.498377 1446402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:28:40.498383 1446402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:28:40.498490 1446402 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:28:40.498554 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:28:40.524507 1446402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:28:40.524524 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:40.524533 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:40.524546 1446402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:28:40.524568 1446402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:28:40.524688 1446402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:28:40.524764 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:28:40.533361 1446402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:28:40.533424 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:28:40.541244 1446402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:28:40.555755 1446402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:28:40.568267 1446402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1222 00:28:40.581122 1446402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
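The `grep` just above probes `/etc/hosts` for an exact `IP<tab>hostname` pair before minikube would add one. A minimal, hedged illustration of that anchored pattern, run against a throwaway file rather than the real `/etc/hosts`:

```shell
# Illustrative probe: does the file already contain this exact
# "IP<tab>hostname" line? (a temp file stands in for /etc/hosts)
set -u
HOSTS_FILE=$(mktemp)
printf '192.168.49.2\tcontrol-plane.minikube.internal\n' > "$HOSTS_FILE"
# Trailing $ anchors the match at end of line, as in the logged command.
PATTERN=$(printf '192.168.49.2\tcontrol-plane.minikube.internal$')
if grep -q "$PATTERN" "$HOSTS_FILE"; then
  echo "host entry present"
else
  echo "host entry missing"
fi
rm -f "$HOSTS_FILE"
```

A match (exit 0) lets the runner skip rewriting the hosts file.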
	I1222 00:28:40.585058 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.703120 1446402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:28:40.989767 1446402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:28:40.989777 1446402 certs.go:195] generating shared ca certs ...
	I1222 00:28:40.989791 1446402 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:28:40.989935 1446402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:28:40.989982 1446402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:28:40.989987 1446402 certs.go:257] generating profile certs ...
	I1222 00:28:40.990067 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:28:40.990138 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:28:40.990175 1446402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:28:40.990291 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:28:40.990321 1446402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:28:40.990328 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:28:40.990354 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:28:40.990377 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:28:40.990400 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:28:40.990449 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:40.991096 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:28:41.014750 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:28:41.036655 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:28:41.057901 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:28:41.075308 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:28:41.092360 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:28:41.110513 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:28:41.128091 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:28:41.145457 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:28:41.163271 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:28:41.181040 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:28:41.199219 1446402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:28:41.211792 1446402 ssh_runner.go:195] Run: openssl version
	I1222 00:28:41.217908 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.225276 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:28:41.232519 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236312 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236370 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.277548 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:28:41.285110 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.292519 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:28:41.300133 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304025 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304090 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.345481 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:28:41.353129 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.360704 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:28:41.368364 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372067 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372146 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.413233 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
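The three `openssl x509 -hash` / `test -L` pairs above wire each installed PEM into the OpenSSL trust store, which looks certificates up by an 8-hex-digit subject hash (`<hash>.0`). A hedged sketch of that mechanism with a throwaway self-signed certificate (all paths temporary, not minikube's own):

```shell
# Generate a disposable self-signed cert, compute its OpenSSL subject hash,
# and create the <hash>.0 symlink the trust store lookup expects.
set -eu
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$TMP/ca.key" -out "$TMP/ca.pem" -days 1 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$TMP/ca.pem")
ln -fs "$TMP/ca.pem" "$TMP/$HASH.0"
test -L "$TMP/$HASH.0" && echo "trust link: $HASH.0"
rm -rf "$TMP"
```

The `sudo test -L /etc/ssl/certs/<hash>.0` lines in the log are verifying exactly this symlink after creating it.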
	I1222 00:28:41.421216 1446402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:28:41.424941 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:28:41.465845 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:28:41.509256 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:28:41.550176 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:28:41.591240 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:28:41.636957 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
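Each `-checkend 86400` run above asks whether a certificate expires within the next 24 hours (86400 seconds); a nonzero exit would force regeneration. A hedged, self-contained demonstration with a throwaway certificate:

```shell
# A cert issued for 30 days passes a 24-hour expiry check (-checkend 86400).
set -eu
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$TMP/key.pem" -out "$TMP/cert.pem" -days 30 2>/dev/null
if openssl x509 -noout -in "$TMP/cert.pem" -checkend 86400 >/dev/null; then
  echo "cert valid beyond 24h"
else
  echo "cert expiring: regenerate"
fi
rm -rf "$TMP"
```

This is why the log checks every control-plane client cert in turn before deciding whether a restart can reuse them.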
	I1222 00:28:41.677583 1446402 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:41.677666 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:28:41.677732 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.707257 1446402 cri.go:96] found id: ""
	I1222 00:28:41.707323 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:28:41.715403 1446402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:28:41.715412 1446402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:28:41.715487 1446402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:28:41.722811 1446402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.723316 1446402 kubeconfig.go:125] found "functional-973657" server: "https://192.168.49.2:8441"
	I1222 00:28:41.724615 1446402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:28:41.732758 1446402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:14:06.897851329 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:28:40.577260246 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
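The drift detection above hinges on `diff -u`'s exit status: 0 means the rendered `kubeadm.yaml` is unchanged, 1 means it drifted and the cluster must be reconfigured from the new file. A minimal illustration with throwaway files (contents are illustrative, not the real config):

```shell
# diff exit status drives the reconfigure decision:
#   0 -> files identical, reuse existing config
#   1 -> files differ, reconfigure from the .new file
set -u
TMP=$(mktemp -d)
printf 'value: "A"\n' > "$TMP/kubeadm.yaml"
printf 'value: "B"\n' > "$TMP/kubeadm.yaml.new"
if diff -u "$TMP/kubeadm.yaml" "$TMP/kubeadm.yaml.new" >/dev/null; then
  echo "no drift"
else
  echo "drift detected: reconfigure"
fi
rm -rf "$TMP"
```

In this run the drift was the `enable-admission-plugins` value, so the runner proceeds to stop kube-system containers and re-run the kubeadm init phases.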
	I1222 00:28:41.732777 1446402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:28:41.732788 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1222 00:28:41.732853 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.777317 1446402 cri.go:96] found id: ""
	I1222 00:28:41.777381 1446402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:28:41.795672 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:28:41.803787 1446402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 00:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 22 00:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:18 /etc/kubernetes/scheduler.conf
	
	I1222 00:28:41.803861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:28:41.811861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:28:41.819685 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.819741 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:28:41.827761 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.835493 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.835553 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.843556 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:28:41.851531 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.851587 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:28:41.860145 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:28:41.868219 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:41.913117 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.003962 1446402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090816856s)
	I1222 00:28:43.004040 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.212066 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.273727 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.319285 1446402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:28:43.319357 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:43.819515 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:44.319574 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:44.820396 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:45.320627 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:45.819505 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:46.320284 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:46.820238 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:47.320289 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:47.819431 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:48.319438 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:48.820203 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:49.320163 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:49.820253 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:50.320340 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:50.820353 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:51.320143 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:51.819557 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:52.319533 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:52.819532 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:53.319872 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:53.819550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:54.320283 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:54.820042 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:55.319836 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:55.820287 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:56.320324 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:56.819506 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:57.320256 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:57.820518 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:58.320250 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:58.819713 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:59.319563 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:59.820373 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:00.320250 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:00.819558 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:01.320363 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:01.820455 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:02.320264 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:02.820241 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:03.320188 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:03.820211 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:04.319540 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:04.819438 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:05.320247 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:05.820436 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:06.320370 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:06.819539 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:07.319751 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:07.820258 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:08.319764 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:08.820469 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:09.319565 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:09.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:10.319521 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:10.819559 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:11.319690 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:11.819773 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:12.319579 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:12.820346 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:13.320217 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:13.820210 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:14.320172 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:14.819550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:15.319430 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:15.820196 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:16.319448 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:16.819507 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:17.320526 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:17.819522 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:18.319482 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:18.820476 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:19.319544 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:19.820495 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:20.319558 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:20.820340 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:21.320236 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:21.820518 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:22.319699 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:22.819573 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:23.319567 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:23.819533 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:24.319887 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:24.819624 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:25.320279 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:25.820331 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:26.320411 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:26.819541 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:27.320442 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:27.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:28.319550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:28.820464 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:29.320504 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:29.819508 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:30.319443 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:30.819528 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:31.319503 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:31.819888 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:32.319676 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:32.819521 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:33.319477 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:33.819820 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:34.319851 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:34.819577 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:35.320381 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:35.820397 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:36.320202 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:36.820411 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:37.319449 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:37.819535 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:38.319499 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:38.820465 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:39.319496 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:39.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:40.319552 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:40.819553 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:41.319757 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:41.820402 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:42.319587 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:42.820218 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
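The half-second cadence of the `pgrep` runs above is a poll-until-deadline loop waiting for the kube-apiserver process to appear (here it never does, so the runner falls through to log gathering after ~60s). A hedged shell sketch of the same shape — `pgrep -x` on a bare process name, a simplification of the log's `-xnf` full-command match; names and timeouts are illustrative:

```shell
# Poll for a process every 500ms until it appears or the deadline passes.
wait_for_process() {
  name=$1; timeout=$2
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if pgrep -x "$name" >/dev/null 2>&1; then
      echo "found: $name"
      return 0
    fi
    sleep 0.5
  done
  echo "timed out waiting for: $name"
  return 1
}

sleep 5 &                         # stand-in for the process being waited on
wait_for_process sleep 3          # succeeds on the first poll
wait_for_process no-such-proc 1 || true   # expires after ~1s
```

Bounding the wait and degrading to diagnostics on timeout is what lets the run continue into the container/log collection seen next.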
	I1222 00:29:43.320359 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:43.320440 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:43.346534 1446402 cri.go:96] found id: ""
	I1222 00:29:43.346547 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.346555 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:43.346560 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:43.346649 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:43.373797 1446402 cri.go:96] found id: ""
	I1222 00:29:43.373813 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.373820 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:43.373825 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:43.373887 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:43.399270 1446402 cri.go:96] found id: ""
	I1222 00:29:43.399284 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.399291 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:43.399296 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:43.399363 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:43.423840 1446402 cri.go:96] found id: ""
	I1222 00:29:43.423855 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.423862 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:43.423868 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:43.423926 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:43.447537 1446402 cri.go:96] found id: ""
	I1222 00:29:43.447551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.447558 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:43.447564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:43.447626 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:43.474001 1446402 cri.go:96] found id: ""
	I1222 00:29:43.474016 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.474024 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:43.474029 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:43.474123 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:43.502707 1446402 cri.go:96] found id: ""
	I1222 00:29:43.502721 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.502728 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:43.502736 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:43.502746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:43.560014 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:43.560034 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:43.575973 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:43.575990 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:43.644984 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:43.636222   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.636633   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638265   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638673   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.640308   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:43.636222   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.636633   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638265   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638673   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.640308   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:43.644996 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:43.645007 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:43.711821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:43.711841 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:46.243876 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:46.255639 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:46.255701 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:46.285594 1446402 cri.go:96] found id: ""
	I1222 00:29:46.285608 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.285615 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:46.285621 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:46.285685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:46.313654 1446402 cri.go:96] found id: ""
	I1222 00:29:46.313669 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.313676 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:46.313694 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:46.313755 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:46.339799 1446402 cri.go:96] found id: ""
	I1222 00:29:46.339815 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.339822 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:46.339828 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:46.339891 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:46.365156 1446402 cri.go:96] found id: ""
	I1222 00:29:46.365184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.365192 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:46.365198 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:46.365265 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:46.394145 1446402 cri.go:96] found id: ""
	I1222 00:29:46.394159 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.394167 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:46.394172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:46.394233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:46.418776 1446402 cri.go:96] found id: ""
	I1222 00:29:46.418790 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.418797 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:46.418803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:46.418864 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:46.442806 1446402 cri.go:96] found id: ""
	I1222 00:29:46.442820 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.442828 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:46.442841 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:46.442851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:46.499137 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:46.499157 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:46.515023 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:46.515038 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:46.583664 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:46.574797   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.575467   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577211   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577813   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.579510   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:46.574797   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.575467   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577211   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577813   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.579510   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:46.583675 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:46.583687 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:46.647550 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:46.647569 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.182538 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:49.192713 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:49.192773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:49.216898 1446402 cri.go:96] found id: ""
	I1222 00:29:49.216912 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.216919 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:49.216924 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:49.216980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:49.249605 1446402 cri.go:96] found id: ""
	I1222 00:29:49.249618 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.249626 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:49.249631 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:49.249690 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:49.280524 1446402 cri.go:96] found id: ""
	I1222 00:29:49.280539 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.280546 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:49.280552 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:49.280611 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:49.311301 1446402 cri.go:96] found id: ""
	I1222 00:29:49.311315 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.311323 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:49.311327 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:49.311385 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:49.336538 1446402 cri.go:96] found id: ""
	I1222 00:29:49.336551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.336559 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:49.336564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:49.336624 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:49.364232 1446402 cri.go:96] found id: ""
	I1222 00:29:49.364247 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.364256 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:49.364262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:49.364326 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:49.388613 1446402 cri.go:96] found id: ""
	I1222 00:29:49.388638 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.388646 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:49.388654 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:49.388664 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:49.451680 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:49.443682   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.444280   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.445741   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.446220   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.447728   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:49.443682   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.444280   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.445741   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.446220   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.447728   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:49.451690 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:49.451701 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:49.514558 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:49.514577 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.543077 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:49.543095 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:49.600979 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:49.600997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:52.116977 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:52.127516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:52.127578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:52.154761 1446402 cri.go:96] found id: ""
	I1222 00:29:52.154783 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.154790 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:52.154796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:52.154857 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:52.180288 1446402 cri.go:96] found id: ""
	I1222 00:29:52.180303 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.180310 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:52.180316 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:52.180376 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:52.208439 1446402 cri.go:96] found id: ""
	I1222 00:29:52.208454 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.208461 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:52.208466 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:52.208527 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:52.233901 1446402 cri.go:96] found id: ""
	I1222 00:29:52.233914 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.233932 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:52.233938 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:52.234004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:52.269797 1446402 cri.go:96] found id: ""
	I1222 00:29:52.269821 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.269829 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:52.269835 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:52.269901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:52.297204 1446402 cri.go:96] found id: ""
	I1222 00:29:52.297219 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.297236 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:52.297242 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:52.297308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:52.326411 1446402 cri.go:96] found id: ""
	I1222 00:29:52.326425 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.326433 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:52.326440 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:52.326450 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:52.387688 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:52.379564   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.380283   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.381856   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.382478   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.383956   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:52.379564   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.380283   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.381856   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.382478   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.383956   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:52.387700 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:52.387716 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:52.453506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:52.453524 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:52.483252 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:52.483269 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:52.540786 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:52.540804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.056509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:55.067103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:55.067178 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:55.093620 1446402 cri.go:96] found id: ""
	I1222 00:29:55.093649 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.093656 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:55.093663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:55.093734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:55.128411 1446402 cri.go:96] found id: ""
	I1222 00:29:55.128424 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.128432 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:55.128436 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:55.128504 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:55.154633 1446402 cri.go:96] found id: ""
	I1222 00:29:55.154646 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.154654 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:55.154659 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:55.154730 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:55.181169 1446402 cri.go:96] found id: ""
	I1222 00:29:55.181184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.181191 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:55.181197 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:55.181256 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:55.206353 1446402 cri.go:96] found id: ""
	I1222 00:29:55.206367 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.206374 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:55.206379 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:55.206439 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:55.234930 1446402 cri.go:96] found id: ""
	I1222 00:29:55.234963 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.234971 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:55.234977 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:55.235052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:55.269275 1446402 cri.go:96] found id: ""
	I1222 00:29:55.269290 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.269298 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:55.269306 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:55.269316 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:55.332423 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:55.332442 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.348393 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:55.348409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:55.411746 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:55.403223   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.403831   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.405510   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.406036   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.407668   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:55.403223   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.403831   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.405510   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.406036   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.407668   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:55.411756 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:55.411767 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:55.478898 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:55.478918 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.007945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:58.028590 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:58.028654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:58.053263 1446402 cri.go:96] found id: ""
	I1222 00:29:58.053277 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.053284 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:58.053290 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:58.053349 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:58.078650 1446402 cri.go:96] found id: ""
	I1222 00:29:58.078664 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.078671 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:58.078676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:58.078746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:58.104284 1446402 cri.go:96] found id: ""
	I1222 00:29:58.104298 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.104305 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:58.104310 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:58.104372 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:58.133078 1446402 cri.go:96] found id: ""
	I1222 00:29:58.133103 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.133110 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:58.133116 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:58.133194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:58.160079 1446402 cri.go:96] found id: ""
	I1222 00:29:58.160092 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.160100 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:58.160105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:58.160209 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:58.184050 1446402 cri.go:96] found id: ""
	I1222 00:29:58.184070 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.184091 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:58.184098 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:58.184161 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:58.207826 1446402 cri.go:96] found id: ""
	I1222 00:29:58.207840 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.207847 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:58.207854 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:58.207864 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:58.275859 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:58.275886 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.308307 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:58.308324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:58.365952 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:58.365971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:58.381771 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:58.381788 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:58.449730 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:00.951841 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:00.968627 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:00.968704 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:01.017629 1446402 cri.go:96] found id: ""
	I1222 00:30:01.017648 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.017657 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:01.017665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:01.017745 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:01.052801 1446402 cri.go:96] found id: ""
	I1222 00:30:01.052819 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.052829 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:01.052837 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:01.052908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:01.090908 1446402 cri.go:96] found id: ""
	I1222 00:30:01.090924 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.090942 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:01.090949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:01.091024 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:01.135566 1446402 cri.go:96] found id: ""
	I1222 00:30:01.135584 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.135592 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:01.135599 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:01.135681 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:01.183704 1446402 cri.go:96] found id: ""
	I1222 00:30:01.183720 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.183728 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:01.183734 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:01.183803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:01.237284 1446402 cri.go:96] found id: ""
	I1222 00:30:01.237300 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.237315 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:01.237321 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:01.237397 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:01.274702 1446402 cri.go:96] found id: ""
	I1222 00:30:01.274719 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.274727 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:01.274735 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:01.274746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:01.337817 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:01.337838 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:01.357916 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:01.357936 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:01.439644 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:01.439657 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:01.439672 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:01.506150 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:01.506173 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.047348 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:04.057922 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:04.057990 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:04.083599 1446402 cri.go:96] found id: ""
	I1222 00:30:04.083613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.083620 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:04.083625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:04.083697 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:04.109159 1446402 cri.go:96] found id: ""
	I1222 00:30:04.109174 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.109181 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:04.109186 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:04.109245 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:04.138314 1446402 cri.go:96] found id: ""
	I1222 00:30:04.138329 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.138336 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:04.138344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:04.138405 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:04.164036 1446402 cri.go:96] found id: ""
	I1222 00:30:04.164051 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.164058 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:04.164078 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:04.164143 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:04.189566 1446402 cri.go:96] found id: ""
	I1222 00:30:04.189581 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.189588 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:04.189593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:04.189657 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:04.214647 1446402 cri.go:96] found id: ""
	I1222 00:30:04.214662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.214669 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:04.214675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:04.214746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:04.243657 1446402 cri.go:96] found id: ""
	I1222 00:30:04.243672 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.243680 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:04.243687 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:04.243700 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:04.312395 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:04.312414 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.342163 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:04.342181 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:04.399936 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:04.399958 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:04.416847 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:04.416863 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:04.482794 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:06.983066 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:06.993652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:06.993715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:07.023165 1446402 cri.go:96] found id: ""
	I1222 00:30:07.023180 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.023187 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:07.023192 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:07.023255 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:07.049538 1446402 cri.go:96] found id: ""
	I1222 00:30:07.049552 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.049560 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:07.049565 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:07.049629 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:07.075257 1446402 cri.go:96] found id: ""
	I1222 00:30:07.075277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.075284 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:07.075289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:07.075351 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:07.101441 1446402 cri.go:96] found id: ""
	I1222 00:30:07.101456 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.101463 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:07.101469 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:07.101532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:07.128366 1446402 cri.go:96] found id: ""
	I1222 00:30:07.128380 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.128392 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:07.128398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:07.128460 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:07.152988 1446402 cri.go:96] found id: ""
	I1222 00:30:07.153005 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.153013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:07.153019 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:07.153079 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:07.178387 1446402 cri.go:96] found id: ""
	I1222 00:30:07.178401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.178409 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:07.178428 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:07.178440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:07.194549 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:07.194566 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:07.271952 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:07.271961 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:07.271973 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:07.346114 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:07.346134 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:07.373577 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:07.373593 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:09.930306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:09.940949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:09.941017 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:09.968763 1446402 cri.go:96] found id: ""
	I1222 00:30:09.968777 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.968784 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:09.968789 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:09.968848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:09.992991 1446402 cri.go:96] found id: ""
	I1222 00:30:09.993006 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.993013 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:09.993018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:09.993082 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:10.029788 1446402 cri.go:96] found id: ""
	I1222 00:30:10.029804 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.029811 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:10.029817 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:10.029886 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:10.067395 1446402 cri.go:96] found id: ""
	I1222 00:30:10.067410 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.067416 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:10.067422 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:10.067499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:10.095007 1446402 cri.go:96] found id: ""
	I1222 00:30:10.095022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.095030 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:10.095036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:10.095101 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:10.123474 1446402 cri.go:96] found id: ""
	I1222 00:30:10.123495 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.123503 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:10.123509 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:10.123573 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:10.153420 1446402 cri.go:96] found id: ""
	I1222 00:30:10.153435 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.153441 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:10.153448 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:10.153459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:10.210172 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:10.210193 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:10.226706 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:10.226725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:10.315292 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:10.315303 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:10.315313 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:10.383703 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:10.383725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:12.913638 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:12.925302 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:12.925369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:12.950905 1446402 cri.go:96] found id: ""
	I1222 00:30:12.950919 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.950930 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:12.950935 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:12.950996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:12.975557 1446402 cri.go:96] found id: ""
	I1222 00:30:12.975587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.975596 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:12.975609 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:12.975679 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:13.000143 1446402 cri.go:96] found id: ""
	I1222 00:30:13.000157 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.000165 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:13.000171 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:13.000234 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:13.026672 1446402 cri.go:96] found id: ""
	I1222 00:30:13.026694 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.026702 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:13.026709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:13.026773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:13.055830 1446402 cri.go:96] found id: ""
	I1222 00:30:13.055846 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.055854 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:13.055859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:13.055923 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:13.082359 1446402 cri.go:96] found id: ""
	I1222 00:30:13.082374 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.082382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:13.082387 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:13.082449 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:13.108828 1446402 cri.go:96] found id: ""
	I1222 00:30:13.108842 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.108850 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:13.108858 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:13.108869 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:13.165350 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:13.165373 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:13.181480 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:13.181497 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:13.246107 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:13.246118 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:13.246128 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:13.320470 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:13.320490 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:15.851791 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:15.862330 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:15.862391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:15.890336 1446402 cri.go:96] found id: ""
	I1222 00:30:15.890350 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.890358 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:15.890364 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:15.890428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:15.917647 1446402 cri.go:96] found id: ""
	I1222 00:30:15.917662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.917670 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:15.917675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:15.917737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:15.948052 1446402 cri.go:96] found id: ""
	I1222 00:30:15.948074 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.948083 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:15.948089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:15.948155 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:15.973080 1446402 cri.go:96] found id: ""
	I1222 00:30:15.973094 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.973101 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:15.973107 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:15.973167 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:15.998935 1446402 cri.go:96] found id: ""
	I1222 00:30:15.998950 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.998957 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:15.998962 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:15.999025 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:16.027611 1446402 cri.go:96] found id: ""
	I1222 00:30:16.027628 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.027638 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:16.027644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:16.027727 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:16.053780 1446402 cri.go:96] found id: ""
	I1222 00:30:16.053794 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.053802 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:16.053809 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:16.053823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:16.124007 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:16.124030 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:16.124042 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:16.186716 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:16.186736 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:16.216494 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:16.216511 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:16.279107 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:16.279127 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:18.798677 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:18.809493 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:18.809564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:18.835308 1446402 cri.go:96] found id: ""
	I1222 00:30:18.835323 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.835337 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:18.835344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:18.835408 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:18.861968 1446402 cri.go:96] found id: ""
	I1222 00:30:18.861982 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.861989 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:18.861995 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:18.862052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:18.887230 1446402 cri.go:96] found id: ""
	I1222 00:30:18.887243 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.887250 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:18.887256 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:18.887313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:18.912928 1446402 cri.go:96] found id: ""
	I1222 00:30:18.912942 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.912949 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:18.912954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:18.913016 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:18.939487 1446402 cri.go:96] found id: ""
	I1222 00:30:18.939501 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.939509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:18.939514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:18.939578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:18.973342 1446402 cri.go:96] found id: ""
	I1222 00:30:18.973356 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.973364 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:18.973369 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:18.973428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:18.997889 1446402 cri.go:96] found id: ""
	I1222 00:30:18.997913 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.997920 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:18.997927 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:18.997938 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:19.055572 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:19.055591 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:19.072427 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:19.072443 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:19.139616 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:19.139628 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:19.139638 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:19.202678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:19.202697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:21.731757 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:21.742262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:21.742322 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:21.768714 1446402 cri.go:96] found id: ""
	I1222 00:30:21.768728 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.768736 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:21.768741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:21.768804 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:21.799253 1446402 cri.go:96] found id: ""
	I1222 00:30:21.799269 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.799276 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:21.799283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:21.799344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:21.824941 1446402 cri.go:96] found id: ""
	I1222 00:30:21.824963 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.824970 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:21.824975 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:21.825035 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:21.850741 1446402 cri.go:96] found id: ""
	I1222 00:30:21.850755 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.850762 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:21.850767 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:21.850829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:21.876572 1446402 cri.go:96] found id: ""
	I1222 00:30:21.876587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.876595 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:21.876600 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:21.876660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:21.902799 1446402 cri.go:96] found id: ""
	I1222 00:30:21.902814 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.902821 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:21.902827 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:21.902888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:21.928559 1446402 cri.go:96] found id: ""
	I1222 00:30:21.928573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.928580 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:21.928587 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:21.928597 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:21.984144 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:21.984164 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:22.000384 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:22.000402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:22.073778 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:22.073791 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:22.073804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:22.146346 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:22.146377 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.676106 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:24.687741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:24.687862 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:24.714182 1446402 cri.go:96] found id: ""
	I1222 00:30:24.714204 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.714212 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:24.714217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:24.714281 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:24.740930 1446402 cri.go:96] found id: ""
	I1222 00:30:24.740944 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.740951 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:24.740957 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:24.741018 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:24.767599 1446402 cri.go:96] found id: ""
	I1222 00:30:24.767613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.767621 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:24.767626 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:24.767685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:24.792739 1446402 cri.go:96] found id: ""
	I1222 00:30:24.792753 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.792760 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:24.792766 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:24.792827 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:24.816926 1446402 cri.go:96] found id: ""
	I1222 00:30:24.816940 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.816948 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:24.816953 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:24.817012 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:24.842765 1446402 cri.go:96] found id: ""
	I1222 00:30:24.842780 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.842788 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:24.842794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:24.842872 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:24.869078 1446402 cri.go:96] found id: ""
	I1222 00:30:24.869092 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.869099 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:24.869108 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:24.869119 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.903296 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:24.903312 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:24.961056 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:24.961075 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:24.976812 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:24.976828 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:25.069840 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:25.069853 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:25.069866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.636563 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:27.647100 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:27.647166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:27.672723 1446402 cri.go:96] found id: ""
	I1222 00:30:27.672737 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.672745 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:27.672750 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:27.672813 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:27.702441 1446402 cri.go:96] found id: ""
	I1222 00:30:27.702455 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.702462 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:27.702468 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:27.702530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:27.731422 1446402 cri.go:96] found id: ""
	I1222 00:30:27.731436 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.731443 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:27.731448 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:27.731509 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:27.756265 1446402 cri.go:96] found id: ""
	I1222 00:30:27.756279 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.756287 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:27.756292 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:27.756354 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:27.779774 1446402 cri.go:96] found id: ""
	I1222 00:30:27.779791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.779798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:27.779804 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:27.779867 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:27.805305 1446402 cri.go:96] found id: ""
	I1222 00:30:27.805320 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.805327 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:27.805333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:27.805396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:27.835772 1446402 cri.go:96] found id: ""
	I1222 00:30:27.835786 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.835794 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:27.835802 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:27.835813 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:27.851527 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:27.851543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:27.917867 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:27.917877 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:27.917889 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.981255 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:27.981274 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:28.012714 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:28.012732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:30.570668 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:30.581032 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:30.581096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:30.605788 1446402 cri.go:96] found id: ""
	I1222 00:30:30.605801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.605809 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:30.605816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:30.605878 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:30.630263 1446402 cri.go:96] found id: ""
	I1222 00:30:30.630277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.630284 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:30.630289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:30.630348 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:30.655578 1446402 cri.go:96] found id: ""
	I1222 00:30:30.655593 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.655600 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:30.655608 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:30.655668 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:30.680304 1446402 cri.go:96] found id: ""
	I1222 00:30:30.680319 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.680326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:30.680332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:30.680390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:30.706799 1446402 cri.go:96] found id: ""
	I1222 00:30:30.706812 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.706819 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:30.706826 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:30.706888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:30.732009 1446402 cri.go:96] found id: ""
	I1222 00:30:30.732023 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.732030 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:30.732036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:30.732145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:30.758260 1446402 cri.go:96] found id: ""
	I1222 00:30:30.758274 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.758282 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:30.758289 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:30.758302 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:30.773937 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:30.773955 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:30.836710 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:30.836720 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:30.836734 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:30.898609 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:30.898629 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:30.926987 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:30.927002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.488514 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:33.500859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:33.500936 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:33.534647 1446402 cri.go:96] found id: ""
	I1222 00:30:33.534662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.534669 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:33.534675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:33.534740 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:33.567528 1446402 cri.go:96] found id: ""
	I1222 00:30:33.567542 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.567550 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:33.567556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:33.567619 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:33.592756 1446402 cri.go:96] found id: ""
	I1222 00:30:33.592770 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.592777 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:33.592783 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:33.592843 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:33.618141 1446402 cri.go:96] found id: ""
	I1222 00:30:33.618155 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.618162 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:33.618169 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:33.618229 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:33.643676 1446402 cri.go:96] found id: ""
	I1222 00:30:33.643690 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.643697 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:33.643702 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:33.643766 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:33.675007 1446402 cri.go:96] found id: ""
	I1222 00:30:33.675022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.675029 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:33.675035 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:33.675096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:33.701088 1446402 cri.go:96] found id: ""
	I1222 00:30:33.701104 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.701112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:33.701119 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:33.701130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.757879 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:33.757898 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:33.773857 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:33.773873 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:33.838724 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:33.838735 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:33.838745 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:33.901316 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:33.901336 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:36.433582 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:36.443819 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:36.443881 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:36.467506 1446402 cri.go:96] found id: ""
	I1222 00:30:36.467521 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.467528 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:36.467534 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:36.467596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:36.502511 1446402 cri.go:96] found id: ""
	I1222 00:30:36.502525 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.502532 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:36.502538 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:36.502596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:36.528768 1446402 cri.go:96] found id: ""
	I1222 00:30:36.528782 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.528789 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:36.528795 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:36.528856 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:36.563520 1446402 cri.go:96] found id: ""
	I1222 00:30:36.563534 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.563552 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:36.563558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:36.563625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:36.587776 1446402 cri.go:96] found id: ""
	I1222 00:30:36.587791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.587798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:36.587803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:36.587870 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:36.613760 1446402 cri.go:96] found id: ""
	I1222 00:30:36.613774 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.613781 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:36.613786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:36.613846 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:36.638515 1446402 cri.go:96] found id: ""
	I1222 00:30:36.638529 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.638536 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:36.638544 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:36.638554 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:36.697219 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:36.697239 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:36.713436 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:36.713452 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:36.780368 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:36.780381 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:36.780393 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:36.842888 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:36.842908 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.372135 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:39.382719 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:39.382781 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:39.408981 1446402 cri.go:96] found id: ""
	I1222 00:30:39.408994 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.409002 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:39.409007 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:39.409066 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:39.442559 1446402 cri.go:96] found id: ""
	I1222 00:30:39.442573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.442581 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:39.442586 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:39.442643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:39.468577 1446402 cri.go:96] found id: ""
	I1222 00:30:39.468591 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.468598 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:39.468603 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:39.468660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:39.510316 1446402 cri.go:96] found id: ""
	I1222 00:30:39.510331 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.510339 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:39.510345 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:39.510407 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:39.540511 1446402 cri.go:96] found id: ""
	I1222 00:30:39.540526 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.540538 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:39.540544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:39.540607 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:39.567225 1446402 cri.go:96] found id: ""
	I1222 00:30:39.567239 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.567246 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:39.567251 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:39.567313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:39.592091 1446402 cri.go:96] found id: ""
	I1222 00:30:39.592105 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.592112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:39.592119 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:39.592130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.622343 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:39.622362 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:39.679425 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:39.679444 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:39.696213 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:39.696230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:39.769659 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:39.769670 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:39.769680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.336173 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:42.346558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:42.346621 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:42.370787 1446402 cri.go:96] found id: ""
	I1222 00:30:42.370802 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.370810 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:42.370816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:42.370877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:42.395960 1446402 cri.go:96] found id: ""
	I1222 00:30:42.395973 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.395980 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:42.395985 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:42.396044 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:42.421477 1446402 cri.go:96] found id: ""
	I1222 00:30:42.421491 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.421498 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:42.421504 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:42.421564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:42.446555 1446402 cri.go:96] found id: ""
	I1222 00:30:42.446569 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.446577 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:42.446582 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:42.446642 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:42.472081 1446402 cri.go:96] found id: ""
	I1222 00:30:42.472098 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.472105 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:42.472110 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:42.472169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:42.511362 1446402 cri.go:96] found id: ""
	I1222 00:30:42.511375 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.511382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:42.511388 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:42.511447 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:42.547512 1446402 cri.go:96] found id: ""
	I1222 00:30:42.547527 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.547533 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:42.547541 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:42.547551 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.615776 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:42.615799 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:42.646130 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:42.646146 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:42.705658 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:42.705677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:42.721590 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:42.721610 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:42.787813 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.288531 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:45.303331 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:45.303401 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:45.338450 1446402 cri.go:96] found id: ""
	I1222 00:30:45.338484 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.338492 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:45.338499 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:45.338571 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:45.365473 1446402 cri.go:96] found id: ""
	I1222 00:30:45.365487 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.365494 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:45.365500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:45.365561 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:45.390271 1446402 cri.go:96] found id: ""
	I1222 00:30:45.390285 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.390292 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:45.390298 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:45.390357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:45.414377 1446402 cri.go:96] found id: ""
	I1222 00:30:45.414391 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.414398 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:45.414405 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:45.414465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:45.443708 1446402 cri.go:96] found id: ""
	I1222 00:30:45.443722 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.443729 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:45.443735 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:45.443800 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:45.469111 1446402 cri.go:96] found id: ""
	I1222 00:30:45.469126 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.469133 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:45.469138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:45.469199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:45.506648 1446402 cri.go:96] found id: ""
	I1222 00:30:45.506662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.506670 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:45.506678 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:45.506688 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:45.570224 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:45.570244 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:45.587665 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:45.587682 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:45.658642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.658668 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:45.658680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:45.726278 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:45.726296 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:48.258377 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:48.269041 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:48.269106 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:48.296090 1446402 cri.go:96] found id: ""
	I1222 00:30:48.296110 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.296118 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:48.296124 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:48.296189 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:48.324810 1446402 cri.go:96] found id: ""
	I1222 00:30:48.324824 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.324838 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:48.324844 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:48.324907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:48.355386 1446402 cri.go:96] found id: ""
	I1222 00:30:48.355401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.355408 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:48.355413 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:48.355478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:48.382715 1446402 cri.go:96] found id: ""
	I1222 00:30:48.382738 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.382746 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:48.382752 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:48.382829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:48.408554 1446402 cri.go:96] found id: ""
	I1222 00:30:48.408567 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.408574 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:48.408580 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:48.408643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:48.434270 1446402 cri.go:96] found id: ""
	I1222 00:30:48.434293 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.434300 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:48.434306 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:48.434374 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:48.459881 1446402 cri.go:96] found id: ""
	I1222 00:30:48.459895 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.459903 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:48.459911 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:48.459921 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:48.517466 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:48.517484 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:48.537053 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:48.537070 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:48.604854 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:48.604864 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:48.604874 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:48.671361 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:48.671387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:51.200853 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:51.211776 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:51.211839 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:51.238170 1446402 cri.go:96] found id: ""
	I1222 00:30:51.238186 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.238194 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:51.238199 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:51.238268 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:51.269105 1446402 cri.go:96] found id: ""
	I1222 00:30:51.269134 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.269142 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:51.269148 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:51.269219 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:51.293434 1446402 cri.go:96] found id: ""
	I1222 00:30:51.293457 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.293464 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:51.293470 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:51.293541 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:51.319040 1446402 cri.go:96] found id: ""
	I1222 00:30:51.319055 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.319062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:51.319068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:51.319130 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:51.348957 1446402 cri.go:96] found id: ""
	I1222 00:30:51.348974 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.348982 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:51.348987 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:51.349051 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:51.374220 1446402 cri.go:96] found id: ""
	I1222 00:30:51.374234 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.374242 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:51.374248 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:51.374308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:51.399159 1446402 cri.go:96] found id: ""
	I1222 00:30:51.399173 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.399180 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:51.399188 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:51.399198 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:51.459029 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:51.459048 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:51.475298 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:51.475315 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:51.566963 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:51.566987 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:51.566997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:51.629274 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:51.629295 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:54.157280 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:54.168037 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:54.168148 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:54.193307 1446402 cri.go:96] found id: ""
	I1222 00:30:54.193321 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.193328 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:54.193333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:54.193396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:54.219101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.219115 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.219123 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:54.219128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:54.219194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:54.246374 1446402 cri.go:96] found id: ""
	I1222 00:30:54.246389 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.246396 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:54.246407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:54.246465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:54.271786 1446402 cri.go:96] found id: ""
	I1222 00:30:54.271801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.271808 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:54.271813 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:54.271879 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:54.297101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.297116 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.297123 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:54.297128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:54.297187 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:54.321971 1446402 cri.go:96] found id: ""
	I1222 00:30:54.321984 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.321991 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:54.321997 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:54.322057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:54.347313 1446402 cri.go:96] found id: ""
	I1222 00:30:54.347327 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.347334 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:54.347342 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:54.347353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:54.403888 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:54.403909 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:54.419766 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:54.419782 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:54.484682 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:54.484693 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:54.484703 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:54.552360 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:54.552378 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.081711 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:57.092202 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:57.092266 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:57.117391 1446402 cri.go:96] found id: ""
	I1222 00:30:57.117405 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.117412 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:57.117419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:57.117479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:57.143247 1446402 cri.go:96] found id: ""
	I1222 00:30:57.143261 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.143269 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:57.143274 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:57.143336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:57.167819 1446402 cri.go:96] found id: ""
	I1222 00:30:57.167833 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.167840 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:57.167845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:57.167907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:57.199021 1446402 cri.go:96] found id: ""
	I1222 00:30:57.199036 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.199043 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:57.199049 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:57.199108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:57.222971 1446402 cri.go:96] found id: ""
	I1222 00:30:57.222986 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.222993 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:57.222999 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:57.223058 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:57.248778 1446402 cri.go:96] found id: ""
	I1222 00:30:57.248792 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.248800 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:57.248806 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:57.248865 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:57.274281 1446402 cri.go:96] found id: ""
	I1222 00:30:57.274294 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.274301 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:57.274309 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:57.274319 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:57.336861 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:57.336882 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.365636 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:57.365661 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:57.423967 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:57.423989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:57.440127 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:57.440145 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:57.509798 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.010205 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:00.104650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:00.104734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:00.179982 1446402 cri.go:96] found id: ""
	I1222 00:31:00.180032 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.180041 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:00.180071 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:00.180239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:00.284701 1446402 cri.go:96] found id: ""
	I1222 00:31:00.284717 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.284725 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:00.284731 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:00.284803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:00.386635 1446402 cri.go:96] found id: ""
	I1222 00:31:00.386652 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.386659 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:00.386665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:00.386735 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:00.427920 1446402 cri.go:96] found id: ""
	I1222 00:31:00.427944 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.427959 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:00.427966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:00.428040 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:00.465116 1446402 cri.go:96] found id: ""
	I1222 00:31:00.465134 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.465144 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:00.465151 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:00.465232 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:00.499645 1446402 cri.go:96] found id: ""
	I1222 00:31:00.499660 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.499667 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:00.499673 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:00.499747 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:00.537565 1446402 cri.go:96] found id: ""
	I1222 00:31:00.537582 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.537595 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:00.537604 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:00.537615 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:00.575552 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:00.575567 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:00.633041 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:00.633063 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:00.649172 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:00.649187 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:00.724351 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.724361 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:00.724372 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.287306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:03.298001 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:03.298072 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:03.324825 1446402 cri.go:96] found id: ""
	I1222 00:31:03.324840 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.324847 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:03.324859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:03.324922 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:03.350917 1446402 cri.go:96] found id: ""
	I1222 00:31:03.350931 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.350939 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:03.350944 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:03.351006 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:03.379670 1446402 cri.go:96] found id: ""
	I1222 00:31:03.379685 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.379692 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:03.379697 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:03.379757 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:03.404478 1446402 cri.go:96] found id: ""
	I1222 00:31:03.404492 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.404499 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:03.404505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:03.404566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:03.433469 1446402 cri.go:96] found id: ""
	I1222 00:31:03.433483 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.433491 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:03.433496 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:03.433559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:03.458710 1446402 cri.go:96] found id: ""
	I1222 00:31:03.458724 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.458731 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:03.458737 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:03.458798 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:03.489628 1446402 cri.go:96] found id: ""
	I1222 00:31:03.489641 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.489648 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:03.489656 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:03.489666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.561791 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:03.561811 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:03.591660 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:03.591676 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:03.649546 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:03.649564 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:03.665699 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:03.665717 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:03.732939 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.234625 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:06.245401 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:06.245464 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:06.272079 1446402 cri.go:96] found id: ""
	I1222 00:31:06.272093 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.272100 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:06.272105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:06.272166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:06.297857 1446402 cri.go:96] found id: ""
	I1222 00:31:06.297871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.297881 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:06.297886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:06.297947 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:06.323563 1446402 cri.go:96] found id: ""
	I1222 00:31:06.323578 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.323585 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:06.323591 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:06.323654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:06.352113 1446402 cri.go:96] found id: ""
	I1222 00:31:06.352128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.352135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:06.352140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:06.352201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:06.383883 1446402 cri.go:96] found id: ""
	I1222 00:31:06.383897 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.383906 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:06.383911 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:06.383980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:06.410293 1446402 cri.go:96] found id: ""
	I1222 00:31:06.410307 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.410314 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:06.410319 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:06.410379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:06.436428 1446402 cri.go:96] found id: ""
	I1222 00:31:06.436442 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.436449 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:06.436457 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:06.436467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:06.493371 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:06.493391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:06.511382 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:06.511400 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:06.582246 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.582256 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:06.582266 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:06.644909 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:06.644931 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.176116 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:09.186886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:09.186957 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:09.212045 1446402 cri.go:96] found id: ""
	I1222 00:31:09.212081 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.212088 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:09.212094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:09.212169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:09.237345 1446402 cri.go:96] found id: ""
	I1222 00:31:09.237360 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.237367 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:09.237373 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:09.237435 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:09.262938 1446402 cri.go:96] found id: ""
	I1222 00:31:09.262953 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.262960 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:09.262966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:09.263027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:09.288202 1446402 cri.go:96] found id: ""
	I1222 00:31:09.288216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.288223 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:09.288228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:09.288291 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:09.313061 1446402 cri.go:96] found id: ""
	I1222 00:31:09.313075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.313083 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:09.313088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:09.313151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:09.342668 1446402 cri.go:96] found id: ""
	I1222 00:31:09.342683 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.342691 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:09.342696 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:09.342760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:09.370215 1446402 cri.go:96] found id: ""
	I1222 00:31:09.370239 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.370249 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:09.370258 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:09.370270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:09.433823 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:09.433834 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:09.433846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:09.496002 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:09.496024 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.538432 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:09.538457 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:09.599912 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:09.599933 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.117068 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:12.128268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:12.128331 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:12.154851 1446402 cri.go:96] found id: ""
	I1222 00:31:12.154865 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.154873 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:12.154878 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:12.154961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:12.180838 1446402 cri.go:96] found id: ""
	I1222 00:31:12.180852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.180860 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:12.180865 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:12.180927 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:12.205653 1446402 cri.go:96] found id: ""
	I1222 00:31:12.205667 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.205683 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:12.205689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:12.205760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:12.232339 1446402 cri.go:96] found id: ""
	I1222 00:31:12.232352 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.232360 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:12.232365 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:12.232425 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:12.257997 1446402 cri.go:96] found id: ""
	I1222 00:31:12.258013 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.258020 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:12.258026 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:12.258113 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:12.282449 1446402 cri.go:96] found id: ""
	I1222 00:31:12.282464 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.282472 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:12.282478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:12.282548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:12.308351 1446402 cri.go:96] found id: ""
	I1222 00:31:12.308365 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.308372 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:12.308380 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:12.308391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:12.365268 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:12.365286 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.381163 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:12.381180 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:12.448592 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:12.448603 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:12.448614 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:12.512421 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:12.512440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:15.042734 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:15.076968 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:15.077038 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:15.105454 1446402 cri.go:96] found id: ""
	I1222 00:31:15.105469 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.105477 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:15.105484 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:15.105548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:15.133491 1446402 cri.go:96] found id: ""
	I1222 00:31:15.133517 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.133525 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:15.133531 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:15.133610 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:15.161141 1446402 cri.go:96] found id: ""
	I1222 00:31:15.161155 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.161162 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:15.161168 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:15.161243 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:15.189035 1446402 cri.go:96] found id: ""
	I1222 00:31:15.189062 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.189071 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:15.189077 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:15.189153 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:15.215453 1446402 cri.go:96] found id: ""
	I1222 00:31:15.215467 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.215474 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:15.215479 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:15.215542 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:15.241518 1446402 cri.go:96] found id: ""
	I1222 00:31:15.241542 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.241550 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:15.241556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:15.241627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:15.270847 1446402 cri.go:96] found id: ""
	I1222 00:31:15.270862 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.270878 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:15.270886 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:15.270896 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:15.329892 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:15.329919 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:15.345769 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:15.345787 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:15.412686 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:15.412697 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:15.412708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:15.475513 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:15.475533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:18.013729 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:18.025498 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:18.025570 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:18.051451 1446402 cri.go:96] found id: ""
	I1222 00:31:18.051466 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.051473 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:18.051478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:18.051540 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:18.078412 1446402 cri.go:96] found id: ""
	I1222 00:31:18.078428 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.078436 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:18.078442 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:18.078511 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:18.105039 1446402 cri.go:96] found id: ""
	I1222 00:31:18.105054 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.105062 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:18.105067 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:18.105129 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:18.132285 1446402 cri.go:96] found id: ""
	I1222 00:31:18.132300 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.132308 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:18.132314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:18.132379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:18.160762 1446402 cri.go:96] found id: ""
	I1222 00:31:18.160781 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.160788 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:18.160794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:18.160855 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:18.187281 1446402 cri.go:96] found id: ""
	I1222 00:31:18.187295 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.187303 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:18.187308 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:18.187369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:18.214033 1446402 cri.go:96] found id: ""
	I1222 00:31:18.214048 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.214055 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:18.214062 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:18.214072 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:18.274937 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:18.274957 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:18.291496 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:18.291514 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:18.356830 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:18.356841 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:18.356851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:18.420006 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:18.420026 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:20.955836 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:20.966430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:20.966499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:20.992202 1446402 cri.go:96] found id: ""
	I1222 00:31:20.992216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:20.992223 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:20.992229 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:20.992292 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:21.020435 1446402 cri.go:96] found id: ""
	I1222 00:31:21.020449 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.020456 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:21.020462 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:21.020525 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:21.045920 1446402 cri.go:96] found id: ""
	I1222 00:31:21.045934 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.045940 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:21.045945 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:21.046007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:21.069898 1446402 cri.go:96] found id: ""
	I1222 00:31:21.069912 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.069920 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:21.069926 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:21.069986 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:21.096061 1446402 cri.go:96] found id: ""
	I1222 00:31:21.096075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.096082 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:21.096088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:21.096152 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:21.121380 1446402 cri.go:96] found id: ""
	I1222 00:31:21.121394 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.121401 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:21.121407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:21.121473 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:21.147060 1446402 cri.go:96] found id: ""
	I1222 00:31:21.147083 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.147091 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:21.147098 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:21.147110 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:21.163066 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:21.163085 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:21.229457 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:21.229467 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:21.229482 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:21.296323 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:21.296342 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:21.329392 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:21.329409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:23.886587 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:23.896889 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:23.896949 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:23.921855 1446402 cri.go:96] found id: ""
	I1222 00:31:23.921870 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.921878 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:23.921883 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:23.921943 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:23.947445 1446402 cri.go:96] found id: ""
	I1222 00:31:23.947459 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.947466 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:23.947471 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:23.947532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:23.973150 1446402 cri.go:96] found id: ""
	I1222 00:31:23.973164 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.973171 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:23.973176 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:23.973236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:24.000119 1446402 cri.go:96] found id: ""
	I1222 00:31:24.000133 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.000140 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:24.000145 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:24.000208 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:24.028319 1446402 cri.go:96] found id: ""
	I1222 00:31:24.028333 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.028341 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:24.028346 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:24.028416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:24.054514 1446402 cri.go:96] found id: ""
	I1222 00:31:24.054528 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.054536 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:24.054541 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:24.054623 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:24.079783 1446402 cri.go:96] found id: ""
	I1222 00:31:24.079796 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.079804 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:24.079812 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:24.079823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:24.136543 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:24.136563 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:24.152385 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:24.152402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:24.219394 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:24.219403 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:24.219413 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:24.282313 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:24.282331 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:26.811961 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:26.822374 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:26.822443 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:26.851730 1446402 cri.go:96] found id: ""
	I1222 00:31:26.851745 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.851753 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:26.851758 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:26.851820 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:26.876518 1446402 cri.go:96] found id: ""
	I1222 00:31:26.876533 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.876540 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:26.876545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:26.876614 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:26.906243 1446402 cri.go:96] found id: ""
	I1222 00:31:26.906258 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.906265 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:26.906271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:26.906332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:26.933029 1446402 cri.go:96] found id: ""
	I1222 00:31:26.933043 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.933050 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:26.933056 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:26.933124 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:26.962389 1446402 cri.go:96] found id: ""
	I1222 00:31:26.962404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.962411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:26.962417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:26.962478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:26.986566 1446402 cri.go:96] found id: ""
	I1222 00:31:26.986579 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.986587 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:26.986593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:26.986654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:27.013857 1446402 cri.go:96] found id: ""
	I1222 00:31:27.013872 1446402 logs.go:282] 0 containers: []
	W1222 00:31:27.013885 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:27.013896 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:27.013907 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:27.072155 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:27.072174 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:27.088000 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:27.088018 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:27.155219 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:27.155229 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:27.155240 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:27.220122 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:27.220142 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:29.756602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:29.767503 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:29.767576 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:29.796758 1446402 cri.go:96] found id: ""
	I1222 00:31:29.796773 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.796781 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:29.796786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:29.796848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:29.826111 1446402 cri.go:96] found id: ""
	I1222 00:31:29.826125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.826133 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:29.826138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:29.826199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:29.851803 1446402 cri.go:96] found id: ""
	I1222 00:31:29.851817 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.851827 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:29.851833 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:29.851893 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:29.877952 1446402 cri.go:96] found id: ""
	I1222 00:31:29.877966 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.877973 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:29.877979 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:29.878041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:29.902393 1446402 cri.go:96] found id: ""
	I1222 00:31:29.902406 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.902414 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:29.902419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:29.902499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:29.930875 1446402 cri.go:96] found id: ""
	I1222 00:31:29.930889 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.930896 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:29.930901 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:29.930961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:29.954467 1446402 cri.go:96] found id: ""
	I1222 00:31:29.954481 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.954488 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:29.954496 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:29.954506 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:30.022300 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:30.022322 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:30.101450 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:30.101468 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:30.160615 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:30.160637 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:30.177543 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:30.177570 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:30.250821 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:32.751739 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:32.762856 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:32.762918 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:32.788176 1446402 cri.go:96] found id: ""
	I1222 00:31:32.788191 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.788197 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:32.788203 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:32.788264 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:32.815561 1446402 cri.go:96] found id: ""
	I1222 00:31:32.815575 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.815582 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:32.815587 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:32.815648 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:32.840208 1446402 cri.go:96] found id: ""
	I1222 00:31:32.840222 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.840229 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:32.840235 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:32.840298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:32.865041 1446402 cri.go:96] found id: ""
	I1222 00:31:32.865055 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.865062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:32.865068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:32.865127 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:32.891852 1446402 cri.go:96] found id: ""
	I1222 00:31:32.891871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.891879 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:32.891884 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:32.891956 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:32.916991 1446402 cri.go:96] found id: ""
	I1222 00:31:32.917005 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.917013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:32.917018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:32.917078 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:32.944551 1446402 cri.go:96] found id: ""
	I1222 00:31:32.944564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.944571 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:32.944579 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:32.944589 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:33.001246 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:33.001270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:33.021275 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:33.021294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:33.093331 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:33.093342 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:33.093353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:33.155921 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:33.155942 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:35.686392 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:35.696748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:35.696809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:35.721721 1446402 cri.go:96] found id: ""
	I1222 00:31:35.721736 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.721743 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:35.721748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:35.721836 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:35.769211 1446402 cri.go:96] found id: ""
	I1222 00:31:35.769225 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.769232 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:35.769237 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:35.769296 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:35.801836 1446402 cri.go:96] found id: ""
	I1222 00:31:35.801850 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.801857 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:35.801863 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:35.801925 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:35.829689 1446402 cri.go:96] found id: ""
	I1222 00:31:35.829703 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.829711 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:35.829716 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:35.829775 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:35.855388 1446402 cri.go:96] found id: ""
	I1222 00:31:35.855403 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.855411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:35.855417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:35.855478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:35.886055 1446402 cri.go:96] found id: ""
	I1222 00:31:35.886070 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.886105 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:35.886112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:35.886177 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:35.911567 1446402 cri.go:96] found id: ""
	I1222 00:31:35.911581 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.911589 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:35.911596 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:35.911608 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:35.978738 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:35.978748 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:35.978761 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:36.043835 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:36.043857 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:36.072278 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:36.072294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:36.133943 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:36.133963 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.650565 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:38.660954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:38.661027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:38.685765 1446402 cri.go:96] found id: ""
	I1222 00:31:38.685780 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.685787 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:38.685793 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:38.685859 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:38.711272 1446402 cri.go:96] found id: ""
	I1222 00:31:38.711287 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.711295 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:38.711300 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:38.711366 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:38.739201 1446402 cri.go:96] found id: ""
	I1222 00:31:38.739217 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.739224 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:38.739230 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:38.739299 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:38.769400 1446402 cri.go:96] found id: ""
	I1222 00:31:38.769414 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.769421 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:38.769426 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:38.769486 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:38.805681 1446402 cri.go:96] found id: ""
	I1222 00:31:38.805695 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.805704 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:38.805709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:38.805770 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:38.831145 1446402 cri.go:96] found id: ""
	I1222 00:31:38.831160 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.831167 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:38.831172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:38.831233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:38.861111 1446402 cri.go:96] found id: ""
	I1222 00:31:38.861125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.861132 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:38.861140 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:38.861150 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:38.917581 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:38.917601 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.934979 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:38.934997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:39.009642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:39.009654 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:39.009666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:39.079837 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:39.079866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:41.610509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:41.620849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:41.620915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:41.645625 1446402 cri.go:96] found id: ""
	I1222 00:31:41.645639 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.645647 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:41.645652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:41.645715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:41.671325 1446402 cri.go:96] found id: ""
	I1222 00:31:41.671339 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.671347 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:41.671353 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:41.671413 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:41.695685 1446402 cri.go:96] found id: ""
	I1222 00:31:41.695699 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.695706 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:41.695712 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:41.695772 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:41.721021 1446402 cri.go:96] found id: ""
	I1222 00:31:41.721034 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.721042 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:41.721047 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:41.721108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:41.757975 1446402 cri.go:96] found id: ""
	I1222 00:31:41.757990 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.757997 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:41.758002 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:41.758064 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:41.802251 1446402 cri.go:96] found id: ""
	I1222 00:31:41.802266 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.802273 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:41.802279 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:41.802339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:41.835417 1446402 cri.go:96] found id: ""
	I1222 00:31:41.835433 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.835439 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:41.835447 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:41.835458 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:41.895808 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:41.895827 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:41.911760 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:41.911776 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:41.978878 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:41.978889 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:41.978900 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:42.043394 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:42.043415 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:44.576818 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:44.587175 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:44.587239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:44.613386 1446402 cri.go:96] found id: ""
	I1222 00:31:44.613404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.613411 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:44.613416 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:44.613479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:44.642424 1446402 cri.go:96] found id: ""
	I1222 00:31:44.642444 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.642451 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:44.642456 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:44.642517 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:44.671623 1446402 cri.go:96] found id: ""
	I1222 00:31:44.671637 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.671645 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:44.671650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:44.671720 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:44.697114 1446402 cri.go:96] found id: ""
	I1222 00:31:44.697128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.697135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:44.697140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:44.697199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:44.724199 1446402 cri.go:96] found id: ""
	I1222 00:31:44.724213 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.724220 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:44.724226 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:44.724298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:44.765403 1446402 cri.go:96] found id: ""
	I1222 00:31:44.765417 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.765436 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:44.765443 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:44.765510 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:44.795984 1446402 cri.go:96] found id: ""
	I1222 00:31:44.795999 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.796017 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:44.796026 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:44.796037 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:44.855400 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:44.855420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:44.872483 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:44.872501 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:44.941437 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:44.941449 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:44.941460 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:45.004528 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:45.004550 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.556363 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:47.566634 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:47.566695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:47.593291 1446402 cri.go:96] found id: ""
	I1222 00:31:47.593305 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.593312 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:47.593318 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:47.593387 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:47.617921 1446402 cri.go:96] found id: ""
	I1222 00:31:47.617935 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.617942 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:47.617947 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:47.618007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:47.644745 1446402 cri.go:96] found id: ""
	I1222 00:31:47.644759 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.644766 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:47.644772 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:47.644831 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:47.669635 1446402 cri.go:96] found id: ""
	I1222 00:31:47.669649 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.669656 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:47.669661 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:47.669721 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:47.696237 1446402 cri.go:96] found id: ""
	I1222 00:31:47.696251 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.696258 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:47.696263 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:47.696321 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:47.720858 1446402 cri.go:96] found id: ""
	I1222 00:31:47.720877 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.720884 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:47.720890 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:47.720950 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:47.759042 1446402 cri.go:96] found id: ""
	I1222 00:31:47.759056 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.759064 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:47.759071 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:47.759088 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:47.775637 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:47.775652 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:47.848304 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:47.848314 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:47.848326 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:47.910821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:47.910839 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.939115 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:47.939131 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.495637 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:50.506061 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:50.506147 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:50.531619 1446402 cri.go:96] found id: ""
	I1222 00:31:50.531634 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.531641 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:50.531647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:50.531707 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:50.556202 1446402 cri.go:96] found id: ""
	I1222 00:31:50.556215 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.556222 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:50.556228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:50.556289 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:50.580637 1446402 cri.go:96] found id: ""
	I1222 00:31:50.580651 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.580658 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:50.580663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:50.580726 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:50.605112 1446402 cri.go:96] found id: ""
	I1222 00:31:50.605126 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.605133 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:50.605138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:50.605198 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:50.629268 1446402 cri.go:96] found id: ""
	I1222 00:31:50.629283 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.629290 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:50.629295 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:50.629356 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:50.655550 1446402 cri.go:96] found id: ""
	I1222 00:31:50.655564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.655571 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:50.655576 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:50.655635 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:50.683838 1446402 cri.go:96] found id: ""
	I1222 00:31:50.683852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.683859 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:50.683866 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:50.683877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.739538 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:50.739556 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:50.759933 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:50.759948 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:50.837166 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:50.837177 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:50.837188 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:50.902694 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:50.902713 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:53.430394 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:53.441567 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:53.441627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:53.468013 1446402 cri.go:96] found id: ""
	I1222 00:31:53.468027 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.468034 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:53.468039 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:53.468109 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:53.494162 1446402 cri.go:96] found id: ""
	I1222 00:31:53.494176 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.494183 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:53.494188 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:53.494248 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:53.524039 1446402 cri.go:96] found id: ""
	I1222 00:31:53.524061 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.524068 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:53.524074 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:53.524137 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:53.548965 1446402 cri.go:96] found id: ""
	I1222 00:31:53.548979 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.548987 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:53.548992 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:53.549054 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:53.580216 1446402 cri.go:96] found id: ""
	I1222 00:31:53.580231 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.580238 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:53.580244 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:53.580304 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:53.605286 1446402 cri.go:96] found id: ""
	I1222 00:31:53.605301 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.605308 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:53.605314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:53.605391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:53.630900 1446402 cri.go:96] found id: ""
	I1222 00:31:53.630915 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.630922 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:53.630930 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:53.630940 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:53.686921 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:53.686939 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:53.704267 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:53.704290 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:53.789032 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:53.778364   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.780153   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.781189   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.783313   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.784665   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:31:53.789043 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:53.789054 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:53.855439 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:53.855459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:56.386602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:56.396636 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:56.396695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:56.419622 1446402 cri.go:96] found id: ""
	I1222 00:31:56.419635 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.419642 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:56.419647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:56.419711 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:56.443068 1446402 cri.go:96] found id: ""
	I1222 00:31:56.443082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.443088 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:56.443094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:56.443151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:56.468547 1446402 cri.go:96] found id: ""
	I1222 00:31:56.468561 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.468568 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:56.468573 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:56.468639 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:56.496420 1446402 cri.go:96] found id: ""
	I1222 00:31:56.496434 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.496448 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:56.496453 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:56.496515 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:56.521822 1446402 cri.go:96] found id: ""
	I1222 00:31:56.521837 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.521844 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:56.521849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:56.521910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:56.548113 1446402 cri.go:96] found id: ""
	I1222 00:31:56.548127 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.548135 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:56.548142 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:56.548205 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:56.577150 1446402 cri.go:96] found id: ""
	I1222 00:31:56.577166 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.577173 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:56.577181 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:56.577191 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:56.635797 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:56.635817 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:56.651214 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:56.651230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:56.716938 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.708823   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710450   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710968   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.712514   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
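Every failed probe in this run ends with the same `dial tcp [::1]:8441: connect: connection refused` stderr lines. When triaging dumps like this one, it helps to extract the refused endpoint mechanically rather than by eye; the sketch below does that with a regex over one sample line copied from the log above (the helper name is illustrative, not part of minikube or kubectl):

```python
import re

# Sample stderr line copied (abbreviated escaping) from the log above.
LINE = ('E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" '
        'err="couldn\'t get current server API group list: Get '
        '\\"https://localhost:8441/api?timeout=32s\\": dial tcp [::1]:8441: '
        'connect: connection refused"')

# Matches either a bracketed IPv6 host or a dotted IPv4 host, then the port.
ADDR_RE = re.compile(
    r'dial tcp (\[[^\]]+\]|[0-9.]+):(\d+): connect: connection refused')

def refused_endpoint(line: str):
    """Return (host, port) of a refused dial, or None if the line doesn't match."""
    m = ADDR_RE.search(line)
    return (m.group(1), int(m.group(2))) if m else None

print(refused_endpoint(LINE))  # ('[::1]', 8441)
```

Here every refusal targets port 8441, the apiserver port this functional test configures, which is consistent with the `crictl` queries above finding no kube-apiserver container at all.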
	I1222 00:31:56.716948 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:56.716959 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:56.780730 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:56.780749 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.308156 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:59.318415 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:59.318476 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:59.343305 1446402 cri.go:96] found id: ""
	I1222 00:31:59.343319 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.343326 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:59.343332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:59.343390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:59.368501 1446402 cri.go:96] found id: ""
	I1222 00:31:59.368515 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.368523 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:59.368529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:59.368595 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:59.394364 1446402 cri.go:96] found id: ""
	I1222 00:31:59.394378 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.394385 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:59.394391 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:59.394452 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:59.420068 1446402 cri.go:96] found id: ""
	I1222 00:31:59.420082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.420089 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:59.420094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:59.420160 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:59.444153 1446402 cri.go:96] found id: ""
	I1222 00:31:59.444167 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.444174 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:59.444179 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:59.444239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:59.473812 1446402 cri.go:96] found id: ""
	I1222 00:31:59.473827 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.473834 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:59.473840 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:59.473901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:59.502392 1446402 cri.go:96] found id: ""
	I1222 00:31:59.502405 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.502412 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:59.502420 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:59.502429 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:59.564094 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:59.564114 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.596168 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:59.596186 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:59.652216 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:59.652236 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:59.668263 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:59.668278 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:59.729801 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:59.722174   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.722594   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724162   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724501   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.726011   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:32:02.230111 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:02.241018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:02.241081 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:02.266489 1446402 cri.go:96] found id: ""
	I1222 00:32:02.266506 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.266514 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:02.266522 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:02.266583 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:02.291427 1446402 cri.go:96] found id: ""
	I1222 00:32:02.291451 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.291459 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:02.291465 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:02.291532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:02.317575 1446402 cri.go:96] found id: ""
	I1222 00:32:02.317599 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.317607 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:02.317612 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:02.317683 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:02.346894 1446402 cri.go:96] found id: ""
	I1222 00:32:02.346918 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.346926 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:02.346932 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:02.347004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:02.373650 1446402 cri.go:96] found id: ""
	I1222 00:32:02.373676 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.373683 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:02.373689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:02.373758 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:02.398320 1446402 cri.go:96] found id: ""
	I1222 00:32:02.398334 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.398341 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:02.398347 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:02.398416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:02.430114 1446402 cri.go:96] found id: ""
	I1222 00:32:02.430128 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.430136 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:02.430144 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:02.430154 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:02.485528 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:02.485549 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:02.501732 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:02.501748 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:02.566784 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:02.558532   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.559242   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.560819   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.561301   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.562756   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:32:02.566793 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:02.566804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:02.631159 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:02.631178 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:05.163426 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:05.173887 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:05.173961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:05.199160 1446402 cri.go:96] found id: ""
	I1222 00:32:05.199174 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.199181 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:05.199187 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:05.199257 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:05.223620 1446402 cri.go:96] found id: ""
	I1222 00:32:05.223634 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.223641 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:05.223647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:05.223706 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:05.248870 1446402 cri.go:96] found id: ""
	I1222 00:32:05.248885 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.248893 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:05.248898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:05.248961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:05.274824 1446402 cri.go:96] found id: ""
	I1222 00:32:05.274839 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.274846 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:05.274851 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:05.274910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:05.300225 1446402 cri.go:96] found id: ""
	I1222 00:32:05.300239 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.300251 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:05.300257 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:05.300317 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:05.324470 1446402 cri.go:96] found id: ""
	I1222 00:32:05.324484 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.324492 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:05.324500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:05.324563 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:05.352629 1446402 cri.go:96] found id: ""
	I1222 00:32:05.352647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.352655 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:05.352666 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:05.352677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:05.415991 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:05.416014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:05.431828 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:05.431845 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:05.498339 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:05.489677   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.490403   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492216   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492835   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.494413   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
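Each polling cycle above logs a `logs.go:284] No container was found matching "…"` warning per absent control-plane component. A quick tally of those warnings across a captured log makes the failure shape obvious (everything missing on every cycle, i.e. the control plane never came up, versus a single flaky component). A minimal sketch; the sample lines are abbreviated copies of the dump above:

```python
import re
from collections import Counter

# Abbreviated lines copied from the polling cycles above.
SAMPLE = '''\
W1222 00:32:05.199181 1446402 logs.go:284] No container was found matching "kube-apiserver"
W1222 00:32:05.223641 1446402 logs.go:284] No container was found matching "etcd"
W1222 00:32:08.132211 1446402 logs.go:284] No container was found matching "kube-apiserver"
'''

MISSING_RE = re.compile(r'No container was found matching "([^"]+)"')

def missing_counts(log_text: str) -> Counter:
    """Count how often each component was reported missing in the log text."""
    return Counter(MISSING_RE.findall(log_text))

print(missing_counts(SAMPLE))  # kube-apiserver twice, etcd once
```

In this run the tally is uniform: kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet are all missing on every cycle, pointing at a failed control-plane bring-up rather than a single crashing component.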
	I1222 00:32:05.498349 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:05.498364 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:05.563506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:05.563525 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.094246 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:08.105089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:08.105172 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:08.132175 1446402 cri.go:96] found id: ""
	I1222 00:32:08.132203 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.132211 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:08.132217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:08.132280 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:08.158101 1446402 cri.go:96] found id: ""
	I1222 00:32:08.158115 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.158122 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:08.158128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:08.158204 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:08.187238 1446402 cri.go:96] found id: ""
	I1222 00:32:08.187252 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.187259 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:08.187265 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:08.187325 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:08.211742 1446402 cri.go:96] found id: ""
	I1222 00:32:08.211756 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.211763 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:08.211768 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:08.211830 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:08.236099 1446402 cri.go:96] found id: ""
	I1222 00:32:08.236113 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.236120 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:08.236126 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:08.236199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:08.261393 1446402 cri.go:96] found id: ""
	I1222 00:32:08.261407 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.261424 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:08.261430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:08.261498 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:08.288417 1446402 cri.go:96] found id: ""
	I1222 00:32:08.288439 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.288447 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:08.288456 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:08.288467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:08.304103 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:08.304124 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:08.368642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:08.368652 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:08.368663 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:08.430523 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:08.430543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.458205 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:08.458222 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.020855 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:11.033129 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:11.033201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:11.063371 1446402 cri.go:96] found id: ""
	I1222 00:32:11.063385 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.063392 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:11.063398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:11.063479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:11.089853 1446402 cri.go:96] found id: ""
	I1222 00:32:11.089880 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.089891 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:11.089898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:11.089971 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:11.120928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.120943 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.120971 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:11.120978 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:11.121045 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:11.151464 1446402 cri.go:96] found id: ""
	I1222 00:32:11.151502 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.151510 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:11.151516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:11.151589 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:11.179209 1446402 cri.go:96] found id: ""
	I1222 00:32:11.179224 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.179233 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:11.179238 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:11.179324 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:11.205945 1446402 cri.go:96] found id: ""
	I1222 00:32:11.205979 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.205987 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:11.205993 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:11.206065 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:11.231928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.231942 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.231949 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:11.231957 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:11.231967 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.296038 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:11.296064 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:11.312748 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:11.312764 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:11.378465 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:11.378480 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:11.378499 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:11.444244 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:11.444264 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:13.977331 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:13.989011 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:13.989094 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:14.028691 1446402 cri.go:96] found id: ""
	I1222 00:32:14.028726 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.028734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:14.028739 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:14.028810 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:14.055710 1446402 cri.go:96] found id: ""
	I1222 00:32:14.055725 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.055732 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:14.055738 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:14.055809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:14.082530 1446402 cri.go:96] found id: ""
	I1222 00:32:14.082546 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.082553 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:14.082559 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:14.082625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:14.107817 1446402 cri.go:96] found id: ""
	I1222 00:32:14.107840 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.107847 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:14.107853 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:14.107913 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:14.136680 1446402 cri.go:96] found id: ""
	I1222 00:32:14.136695 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.136701 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:14.136707 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:14.136767 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:14.161938 1446402 cri.go:96] found id: ""
	I1222 00:32:14.161961 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.161968 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:14.161974 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:14.162041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:14.186794 1446402 cri.go:96] found id: ""
	I1222 00:32:14.186808 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.186814 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:14.186823 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:14.186832 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:14.242688 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:14.242708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:14.259715 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:14.259732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:14.326979 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:14.326990 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:14.327002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:14.395678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:14.395705 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:16.929785 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:16.940545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:16.940609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:16.965350 1446402 cri.go:96] found id: ""
	I1222 00:32:16.965365 1446402 logs.go:282] 0 containers: []
	W1222 00:32:16.965372 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:16.965378 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:16.965441 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:17.001431 1446402 cri.go:96] found id: ""
	I1222 00:32:17.001447 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.001455 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:17.001461 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:17.001530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:17.045444 1446402 cri.go:96] found id: ""
	I1222 00:32:17.045459 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.045466 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:17.045472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:17.045531 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:17.080407 1446402 cri.go:96] found id: ""
	I1222 00:32:17.080422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.080429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:17.080435 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:17.080500 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:17.107785 1446402 cri.go:96] found id: ""
	I1222 00:32:17.107799 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.107806 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:17.107812 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:17.107874 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:17.133084 1446402 cri.go:96] found id: ""
	I1222 00:32:17.133099 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.133106 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:17.133112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:17.133170 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:17.162200 1446402 cri.go:96] found id: ""
	I1222 00:32:17.162215 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.162222 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:17.162232 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:17.162243 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:17.220080 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:17.220098 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:17.235955 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:17.235971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:17.302399 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:17.302410 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:17.302420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:17.365559 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:17.365578 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:19.896945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:19.907830 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:19.907900 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:19.933463 1446402 cri.go:96] found id: ""
	I1222 00:32:19.933478 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.933485 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:19.933490 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:19.933556 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:19.958969 1446402 cri.go:96] found id: ""
	I1222 00:32:19.958983 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.958990 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:19.958996 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:19.959057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:19.984725 1446402 cri.go:96] found id: ""
	I1222 00:32:19.984740 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.984748 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:19.984753 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:19.984819 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:20.030303 1446402 cri.go:96] found id: ""
	I1222 00:32:20.030318 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.030326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:20.030332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:20.030400 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:20.067239 1446402 cri.go:96] found id: ""
	I1222 00:32:20.067254 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.067262 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:20.067268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:20.067336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:20.094147 1446402 cri.go:96] found id: ""
	I1222 00:32:20.094161 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.094169 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:20.094174 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:20.094236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:20.120347 1446402 cri.go:96] found id: ""
	I1222 00:32:20.120361 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.120369 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:20.120377 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:20.120387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:20.192596 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:20.192608 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:20.192620 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:20.255011 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:20.255031 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:20.288327 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:20.288344 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:20.347178 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:20.347196 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:22.863692 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:22.873845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:22.873915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:22.898717 1446402 cri.go:96] found id: ""
	I1222 00:32:22.898737 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.898744 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:22.898749 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:22.898808 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:22.923719 1446402 cri.go:96] found id: ""
	I1222 00:32:22.923734 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.923741 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:22.923746 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:22.923806 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:22.953819 1446402 cri.go:96] found id: ""
	I1222 00:32:22.953834 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.953841 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:22.953847 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:22.953908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:22.977769 1446402 cri.go:96] found id: ""
	I1222 00:32:22.977783 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.977791 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:22.977796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:22.977858 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:23.011333 1446402 cri.go:96] found id: ""
	I1222 00:32:23.011348 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.011355 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:23.011361 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:23.011426 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:23.040887 1446402 cri.go:96] found id: ""
	I1222 00:32:23.040900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.040907 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:23.040913 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:23.040973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:23.070583 1446402 cri.go:96] found id: ""
	I1222 00:32:23.070597 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.070604 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:23.070612 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:23.070622 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:23.087115 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:23.087132 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:23.152903 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:23.152913 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:23.152924 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:23.215824 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:23.215846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:23.249147 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:23.249175 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:25.810217 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:25.820952 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:25.821015 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:25.847989 1446402 cri.go:96] found id: ""
	I1222 00:32:25.848004 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.848011 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:25.848016 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:25.848091 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:25.877243 1446402 cri.go:96] found id: ""
	I1222 00:32:25.877258 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.877265 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:25.877271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:25.877332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:25.902255 1446402 cri.go:96] found id: ""
	I1222 00:32:25.902271 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.902278 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:25.902283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:25.902344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:25.927468 1446402 cri.go:96] found id: ""
	I1222 00:32:25.927482 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.927489 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:25.927495 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:25.927559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:25.957558 1446402 cri.go:96] found id: ""
	I1222 00:32:25.957571 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.957578 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:25.957583 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:25.957644 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:25.982483 1446402 cri.go:96] found id: ""
	I1222 00:32:25.982509 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.982517 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:25.982523 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:25.982599 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:26.024676 1446402 cri.go:96] found id: ""
	I1222 00:32:26.024691 1446402 logs.go:282] 0 containers: []
	W1222 00:32:26.024698 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:26.024706 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:26.024724 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:26.087946 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:26.087968 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:26.105041 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:26.105066 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:26.171303 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:26.162481   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.163042   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.164767   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.165348   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.166856   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:26.162481   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.163042   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.164767   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.165348   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.166856   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:26.171313 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:26.171324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:26.239046 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:26.239065 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.769012 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:28.779505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:28.779566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:28.804277 1446402 cri.go:96] found id: ""
	I1222 00:32:28.804291 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.804298 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:28.804303 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:28.804364 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:28.831914 1446402 cri.go:96] found id: ""
	I1222 00:32:28.831927 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.831935 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:28.831940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:28.831999 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:28.858930 1446402 cri.go:96] found id: ""
	I1222 00:32:28.858951 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.858959 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:28.858964 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:28.859026 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:28.884503 1446402 cri.go:96] found id: ""
	I1222 00:32:28.884517 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.884524 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:28.884529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:28.884588 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:28.908385 1446402 cri.go:96] found id: ""
	I1222 00:32:28.908399 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.908406 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:28.908412 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:28.908471 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:28.932216 1446402 cri.go:96] found id: ""
	I1222 00:32:28.932231 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.932238 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:28.932243 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:28.932318 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:28.960692 1446402 cri.go:96] found id: ""
	I1222 00:32:28.960706 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.960714 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:28.960721 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:28.960732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.991268 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:28.991284 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:29.051794 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:29.051812 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:29.076793 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:29.076809 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:29.140856 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:29.132991   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.133597   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135220   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135674   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.137107   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:29.132991   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.133597   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135220   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135674   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.137107   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:29.140866 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:29.140877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:31.704016 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:31.714529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:31.714593 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:31.739665 1446402 cri.go:96] found id: ""
	I1222 00:32:31.739679 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.739687 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:31.739693 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:31.739753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:31.764377 1446402 cri.go:96] found id: ""
	I1222 00:32:31.764391 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.764399 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:31.764404 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:31.764465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:31.793617 1446402 cri.go:96] found id: ""
	I1222 00:32:31.793631 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.793638 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:31.793644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:31.793709 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:31.818025 1446402 cri.go:96] found id: ""
	I1222 00:32:31.818040 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.818047 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:31.818055 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:31.818145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:31.848262 1446402 cri.go:96] found id: ""
	I1222 00:32:31.848277 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.848285 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:31.848293 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:31.848357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:31.873649 1446402 cri.go:96] found id: ""
	I1222 00:32:31.873663 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.873670 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:31.873676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:31.873739 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:31.898375 1446402 cri.go:96] found id: ""
	I1222 00:32:31.898390 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.898397 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:31.898404 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:31.898416 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:31.955541 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:31.955560 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:31.971557 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:31.971574 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:32.067449 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:32.057015   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.057465   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.059124   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.060199   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.063114   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:32.057015   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.057465   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.059124   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.060199   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.063114   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:32.067459 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:32.067469 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:32.129846 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:32.129865 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:34.659453 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:34.669625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:34.669685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:34.696885 1446402 cri.go:96] found id: ""
	I1222 00:32:34.696900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.696907 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:34.696912 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:34.696972 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:34.721026 1446402 cri.go:96] found id: ""
	I1222 00:32:34.721050 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.721058 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:34.721063 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:34.721133 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:34.745654 1446402 cri.go:96] found id: ""
	I1222 00:32:34.745669 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.745687 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:34.745692 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:34.745753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:34.771407 1446402 cri.go:96] found id: ""
	I1222 00:32:34.771422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.771429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:34.771434 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:34.771502 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:34.795734 1446402 cri.go:96] found id: ""
	I1222 00:32:34.795749 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.795756 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:34.795761 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:34.795821 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:34.824632 1446402 cri.go:96] found id: ""
	I1222 00:32:34.824647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.824664 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:34.824670 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:34.824737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:34.850691 1446402 cri.go:96] found id: ""
	I1222 00:32:34.850705 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.850713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:34.850721 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:34.850732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:34.923721 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:34.914435   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.915282   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.916952   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.917512   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.919258   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:34.914435   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.915282   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.916952   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.917512   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.919258   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:34.923732 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:34.923743 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:34.988429 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:34.988447 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:35.032884 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:35.032901 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:35.094822 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:35.094842 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:37.611964 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:37.625103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:37.625168 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:37.652713 1446402 cri.go:96] found id: ""
	I1222 00:32:37.652727 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.652734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:37.652740 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:37.652805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:37.677907 1446402 cri.go:96] found id: ""
	I1222 00:32:37.677921 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.677928 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:37.677934 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:37.677996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:37.706882 1446402 cri.go:96] found id: ""
	I1222 00:32:37.706901 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.706909 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:37.706914 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:37.706973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:37.734381 1446402 cri.go:96] found id: ""
	I1222 00:32:37.734396 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.734403 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:37.734408 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:37.734468 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:37.763444 1446402 cri.go:96] found id: ""
	I1222 00:32:37.763464 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.763483 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:37.763489 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:37.763559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:37.789695 1446402 cri.go:96] found id: ""
	I1222 00:32:37.789718 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.789726 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:37.789732 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:37.789805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:37.818949 1446402 cri.go:96] found id: ""
	I1222 00:32:37.818963 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.818970 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:37.818977 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:37.818989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:37.886829 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:37.877811   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.878764   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880313   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880782   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.882319   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:37.877811   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.878764   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880313   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880782   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.882319   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:37.886840 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:37.886850 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:37.953234 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:37.953253 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:37.982264 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:37.982280 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:38.049773 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:38.049792 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.567633 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:40.577940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:40.578000 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:40.602023 1446402 cri.go:96] found id: ""
	I1222 00:32:40.602038 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.602045 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:40.602051 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:40.602145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:40.630778 1446402 cri.go:96] found id: ""
	I1222 00:32:40.630802 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.630810 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:40.630816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:40.630877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:40.658578 1446402 cri.go:96] found id: ""
	I1222 00:32:40.658592 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.658599 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:40.658605 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:40.658669 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:40.686369 1446402 cri.go:96] found id: ""
	I1222 00:32:40.686384 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.686393 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:40.686399 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:40.686466 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:40.712486 1446402 cri.go:96] found id: ""
	I1222 00:32:40.712501 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.712509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:40.712514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:40.712580 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:40.744516 1446402 cri.go:96] found id: ""
	I1222 00:32:40.744531 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.744538 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:40.744544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:40.744609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:40.770724 1446402 cri.go:96] found id: ""
	I1222 00:32:40.770738 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.770745 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:40.770754 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:40.770766 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.787581 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:40.787598 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:40.853257 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:40.853267 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:40.853279 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:40.918705 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:40.918728 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:40.947006 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:40.947022 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:43.505746 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:43.515847 1446402 kubeadm.go:602] duration metric: took 4m1.800425441s to restartPrimaryControlPlane
	W1222 00:32:43.515910 1446402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:32:43.515983 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:32:43.923830 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:32:43.937721 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:32:43.945799 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:32:43.945856 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:32:43.953730 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:32:43.953738 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:32:43.953790 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:32:43.962117 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:32:43.962172 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:32:43.969797 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:32:43.977738 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:32:43.977798 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:32:43.986214 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:32:43.994326 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:32:43.994386 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:32:44.004154 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:32:44.013730 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:32:44.013800 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:32:44.022121 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:32:44.061736 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:32:44.061785 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:32:44.140713 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:32:44.140778 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:32:44.140818 1446402 kubeadm.go:319] OS: Linux
	I1222 00:32:44.140862 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:32:44.140909 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:32:44.140955 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:32:44.141002 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:32:44.141048 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:32:44.141095 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:32:44.141140 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:32:44.141187 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:32:44.141232 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:32:44.208774 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:32:44.208878 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:32:44.208966 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:32:44.214899 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:32:44.218610 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:32:44.218748 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:32:44.218821 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:32:44.218895 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:32:44.218955 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:32:44.219024 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:32:44.219076 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:32:44.219138 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:32:44.219198 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:32:44.219270 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:32:44.219343 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:32:44.219380 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:32:44.219458 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:32:44.443111 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:32:44.602435 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:32:44.699769 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:32:44.991502 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:32:45.160573 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:32:45.170594 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:32:45.170674 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:32:45.173883 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:32:45.174024 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:32:45.174124 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:32:45.175745 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:32:45.208642 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:32:45.208749 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:32:45.228521 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:32:45.228620 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:32:45.228659 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:32:45.414555 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:32:45.414668 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:36:45.414312 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00033138s
	I1222 00:36:45.414339 1446402 kubeadm.go:319] 
	I1222 00:36:45.414437 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:36:45.414497 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:36:45.414614 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:36:45.414622 1446402 kubeadm.go:319] 
	I1222 00:36:45.414721 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:36:45.414751 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:36:45.414780 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:36:45.414783 1446402 kubeadm.go:319] 
	I1222 00:36:45.419351 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:36:45.419863 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:36:45.420008 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:36:45.420300 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:36:45.420306 1446402 kubeadm.go:319] 
	I1222 00:36:45.420408 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:36:45.420558 1446402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00033138s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 00:36:45.420656 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:36:45.827625 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:36:45.841758 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:36:45.841815 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:36:45.850297 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:36:45.850306 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:36:45.850362 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:36:45.858548 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:36:45.858613 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:36:45.866403 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:36:45.875159 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:36:45.875216 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:36:45.883092 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.891274 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:36:45.891330 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.899439 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:36:45.907618 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:36:45.907680 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:36:45.915873 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:36:45.954554 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:36:45.954640 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:36:46.034225 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:36:46.034294 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:36:46.034329 1446402 kubeadm.go:319] OS: Linux
	I1222 00:36:46.034372 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:36:46.034419 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:36:46.034466 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:36:46.034512 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:36:46.034571 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:36:46.034626 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:36:46.034679 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:36:46.034746 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:36:46.034795 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:36:46.102483 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:36:46.102587 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:36:46.102678 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:36:46.110548 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:36:46.114145 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:36:46.114232 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:36:46.114297 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:36:46.114378 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:36:46.114438 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:36:46.114552 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:36:46.114617 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:36:46.114681 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:36:46.114756 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:36:46.114832 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:36:46.114915 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:36:46.114959 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:36:46.115024 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:36:46.590004 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:36:46.981109 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:36:47.331562 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:36:47.513275 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:36:48.017649 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:36:48.018361 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:36:48.020999 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:36:48.024119 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:36:48.024221 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:36:48.024298 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:36:48.024363 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:36:48.046779 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:36:48.047056 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:36:48.054716 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:36:48.055076 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:36:48.055127 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:36:48.190129 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:36:48.190242 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:40:48.190377 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238043s
	I1222 00:40:48.190402 1446402 kubeadm.go:319] 
	I1222 00:40:48.190458 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:40:48.190495 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:40:48.190599 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:40:48.190604 1446402 kubeadm.go:319] 
	I1222 00:40:48.190706 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:40:48.190737 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:40:48.190766 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:40:48.190769 1446402 kubeadm.go:319] 
	I1222 00:40:48.196227 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:40:48.196675 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:40:48.196785 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:40:48.197020 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:40:48.197025 1446402 kubeadm.go:319] 
	I1222 00:40:48.197092 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:40:48.197152 1446402 kubeadm.go:403] duration metric: took 12m6.51958097s to StartCluster
	I1222 00:40:48.197184 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:40:48.197246 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:40:48.222444 1446402 cri.go:96] found id: ""
	I1222 00:40:48.222459 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.222466 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:40:48.222472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:40:48.222536 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:40:48.256342 1446402 cri.go:96] found id: ""
	I1222 00:40:48.256356 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.256363 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:40:48.256368 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:40:48.256430 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:40:48.285108 1446402 cri.go:96] found id: ""
	I1222 00:40:48.285122 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.285129 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:40:48.285135 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:40:48.285196 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:40:48.317753 1446402 cri.go:96] found id: ""
	I1222 00:40:48.317768 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.317775 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:40:48.317780 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:40:48.317842 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:40:48.347674 1446402 cri.go:96] found id: ""
	I1222 00:40:48.347689 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.347696 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:40:48.347701 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:40:48.347765 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:40:48.372255 1446402 cri.go:96] found id: ""
	I1222 00:40:48.372268 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.372275 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:40:48.372281 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:40:48.372339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:40:48.396691 1446402 cri.go:96] found id: ""
	I1222 00:40:48.396705 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.396713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:40:48.396725 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:40:48.396735 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:40:48.455513 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:40:48.455533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:40:48.471680 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:40:48.471697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:40:48.541459 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:40:48.541473 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:40:48.541483 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:40:48.603413 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:40:48.603432 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:40:48.631201 1446402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:40:48.631242 1446402 out.go:285] * 
	W1222 00:40:48.631304 1446402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.631321 1446402 out.go:285] * 
	W1222 00:40:48.633603 1446402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:40:48.639700 1446402 out.go:203] 
	W1222 00:40:48.642575 1446402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.642620 1446402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:40:48.642642 1446402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:40:48.645844 1446402 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248656814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248726812Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248818752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248887126Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248959487Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249024218Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249082229Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249153910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249223252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249308890Z" level=info msg="Connect containerd service"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249702304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.252215911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272726589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273135610Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272971801Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273361942Z" level=info msg="Start recovering state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.330860881Z" level=info msg="Start event monitor"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331048714Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331117121Z" level=info msg="Start streaming server"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331184855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331242062Z" level=info msg="runtime interface starting up..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331301705Z" level=info msg="starting plugins..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331364582Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331577047Z" level=info msg="containerd successfully booted in 0.110567s"
	Dec 22 00:28:40 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:43:18.958049   23260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:18.958990   23260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:18.960544   23260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:18.961100   23260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:18.962991   23260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:43:19 up 1 day,  7:25,  0 user,  load average: 0.66, 0.33, 0.51
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:43:15 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:16 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 516.
	Dec 22 00:43:16 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:16 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:16 functional-973657 kubelet[23082]: E1222 00:43:16.301806   23082 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:16 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:16 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:16 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 517.
	Dec 22 00:43:16 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:16 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:17 functional-973657 kubelet[23117]: E1222 00:43:17.045273   23117 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:17 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:17 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:17 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 518.
	Dec 22 00:43:17 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:17 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:17 functional-973657 kubelet[23155]: E1222 00:43:17.800230   23155 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:17 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:17 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:18 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 519.
	Dec 22 00:43:18 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:18 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:18 functional-973657 kubelet[23176]: E1222 00:43:18.555052   23176 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:18 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:18 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (369.946668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-973657 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-973657 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (58.971029ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-973657 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-973657 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-973657 describe po hello-node-connect: exit status 1 (58.094295ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-973657 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-973657 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-973657 logs -l app=hello-node-connect: exit status 1 (58.591883ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-973657 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-973657 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-973657 describe svc hello-node-connect: exit status 1 (62.520938ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-973657 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (319.089383ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-973657 cache reload                                                                                                                               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ ssh     │ functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │ 22 Dec 25 00:28 UTC │
	│ kubectl │ functional-973657 kubectl -- --context functional-973657 get pods                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ start   │ -p functional-973657 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:28 UTC │                     │
	│ cp      │ functional-973657 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ config  │ functional-973657 config unset cpus                                                                                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ config  │ functional-973657 config get cpus                                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │                     │
	│ config  │ functional-973657 config set cpus 2                                                                                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ config  │ functional-973657 config get cpus                                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ ssh     │ functional-973657 ssh -n functional-973657 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ config  │ functional-973657 config unset cpus                                                                                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ config  │ functional-973657 config get cpus                                                                                                                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │                     │
	│ ssh     │ functional-973657 ssh echo hello                                                                                                                             │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ cp      │ functional-973657 cp functional-973657:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3337385621/001/cp-test.txt │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ ssh     │ functional-973657 ssh cat /etc/hostname                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ ssh     │ functional-973657 ssh -n functional-973657 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ tunnel  │ functional-973657 tunnel --alsologtostderr                                                                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │                     │
	│ tunnel  │ functional-973657 tunnel --alsologtostderr                                                                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │                     │
	│ cp      │ functional-973657 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ tunnel  │ functional-973657 tunnel --alsologtostderr                                                                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │                     │
	│ ssh     │ functional-973657 ssh -n functional-973657 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:40 UTC │ 22 Dec 25 00:40 UTC │
	│ addons  │ functional-973657 addons list                                                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ addons  │ functional-973657 addons list -o json                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:28:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:28:37.451822 1446402 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:28:37.451933 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.451942 1446402 out.go:374] Setting ErrFile to fd 2...
	I1222 00:28:37.451946 1446402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:28:37.452197 1446402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:28:37.453530 1446402 out.go:368] Setting JSON to false
	I1222 00:28:37.454369 1446402 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":112270,"bootTime":1766251047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:28:37.454418 1446402 start.go:143] virtualization:  
	I1222 00:28:37.457786 1446402 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:28:37.461618 1446402 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:28:37.461721 1446402 notify.go:221] Checking for updates...
	I1222 00:28:37.467381 1446402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:28:37.470438 1446402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:28:37.473311 1446402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:28:37.476105 1446402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:28:37.479015 1446402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:28:37.482344 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:37.482442 1446402 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:28:37.509513 1446402 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:28:37.509620 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.577428 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.567598413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.577529 1446402 docker.go:319] overlay module found
	I1222 00:28:37.580701 1446402 out.go:179] * Using the docker driver based on existing profile
	I1222 00:28:37.583433 1446402 start.go:309] selected driver: docker
	I1222 00:28:37.583443 1446402 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableC
oreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.583549 1446402 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:28:37.583656 1446402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:28:37.637869 1446402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-22 00:28:37.628834862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:28:37.638333 1446402 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 00:28:37.638357 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:37.638411 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:37.638452 1446402 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCo
reDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:37.641536 1446402 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
	I1222 00:28:37.644340 1446402 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:28:37.647258 1446402 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:28:37.650255 1446402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:28:37.650391 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:37.650410 1446402 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 00:28:37.650417 1446402 cache.go:65] Caching tarball of preloaded images
	I1222 00:28:37.650491 1446402 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 00:28:37.650499 1446402 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 00:28:37.650609 1446402 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
	I1222 00:28:37.670527 1446402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 00:28:37.670540 1446402 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 00:28:37.670559 1446402 cache.go:243] Successfully downloaded all kic artifacts
	I1222 00:28:37.670589 1446402 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 00:28:37.670659 1446402 start.go:364] duration metric: took 50.988µs to acquireMachinesLock for "functional-973657"
	I1222 00:28:37.670679 1446402 start.go:96] Skipping create...Using existing machine configuration
	I1222 00:28:37.670683 1446402 fix.go:54] fixHost starting: 
	I1222 00:28:37.670937 1446402 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
	I1222 00:28:37.688276 1446402 fix.go:112] recreateIfNeeded on functional-973657: state=Running err=<nil>
	W1222 00:28:37.688299 1446402 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 00:28:37.691627 1446402 out.go:252] * Updating the running docker "functional-973657" container ...
	I1222 00:28:37.691654 1446402 machine.go:94] provisionDockerMachine start ...
	I1222 00:28:37.691736 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.709165 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.709504 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.709511 1446402 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 00:28:37.842221 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:37.842236 1446402 ubuntu.go:182] provisioning hostname "functional-973657"
	I1222 00:28:37.842299 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:37.861944 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:37.862401 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:37.862411 1446402 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
	I1222 00:28:38.004653 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
	
	I1222 00:28:38.004757 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.029552 1446402 main.go:144] libmachine: Using SSH client type: native
	I1222 00:28:38.029903 1446402 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38390 <nil> <nil>}
	I1222 00:28:38.029921 1446402 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 00:28:38.166540 1446402 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 00:28:38.166558 1446402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 00:28:38.166588 1446402 ubuntu.go:190] setting up certificates
	I1222 00:28:38.166605 1446402 provision.go:84] configureAuth start
	I1222 00:28:38.166666 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:38.184810 1446402 provision.go:143] copyHostCerts
	I1222 00:28:38.184868 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 00:28:38.184883 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 00:28:38.184958 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 00:28:38.185063 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 00:28:38.185068 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 00:28:38.185094 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 00:28:38.185151 1446402 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 00:28:38.185154 1446402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 00:28:38.185176 1446402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 00:28:38.185228 1446402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
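Note: the `generating server cert` line above is minikube's Go code signing a server certificate against the profile's CA, with the listed `san=[...]` entries embedded. An equivalent step can be sketched with the openssl CLI; the throwaway CA, all paths, and subjects below are illustrative assumptions, not minikube's actual code path.

```shell
# Illustrative re-creation of the server-cert step with openssl.
work=$(mktemp -d)

# Throwaway CA (minikube reuses the existing ca.pem / ca-key.pem pair).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$work/ca-key.pem" -out "$work/ca.pem" -subj "/CN=minikubeCA"

# Server key + CSR for the node.
openssl req -newkey rsa:2048 -nodes \
  -keyout "$work/server-key.pem" -out "$work/server.csr" \
  -subj "/O=jenkins.functional-973657/CN=functional-973657"

# Sign the CSR, embedding the SAN list from the log line above.
printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-973657,DNS:localhost,DNS:minikube\n' > "$work/san.cnf"
openssl x509 -req -in "$work/server.csr" -CA "$work/ca.pem" -CAkey "$work/ca-key.pem" \
  -CAcreateserial -days 1 -out "$work/server.pem" -extfile "$work/san.cnf"

# Show the SANs that landed in the cert.
openssl x509 -in "$work/server.pem" -noout -text | grep -A1 'Subject Alternative Name'
```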
	I1222 00:28:38.572282 1446402 provision.go:177] copyRemoteCerts
	I1222 00:28:38.572338 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 00:28:38.572378 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.590440 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.686182 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 00:28:38.704460 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 00:28:38.721777 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 00:28:38.739280 1446402 provision.go:87] duration metric: took 572.652959ms to configureAuth
	I1222 00:28:38.739299 1446402 ubuntu.go:206] setting minikube options for container-runtime
	I1222 00:28:38.739484 1446402 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:28:38.739490 1446402 machine.go:97] duration metric: took 1.047830613s to provisionDockerMachine
	I1222 00:28:38.739496 1446402 start.go:293] postStartSetup for "functional-973657" (driver="docker")
	I1222 00:28:38.739506 1446402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 00:28:38.739568 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 00:28:38.739605 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.761201 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:38.864350 1446402 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 00:28:38.868359 1446402 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 00:28:38.868379 1446402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 00:28:38.868390 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 00:28:38.868447 1446402 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 00:28:38.868524 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 00:28:38.868598 1446402 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
	I1222 00:28:38.868641 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
	I1222 00:28:38.878975 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:38.897171 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
	I1222 00:28:38.915159 1446402 start.go:296] duration metric: took 175.648245ms for postStartSetup
	I1222 00:28:38.915247 1446402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:28:38.915286 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:38.933740 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.031561 1446402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 00:28:39.036720 1446402 fix.go:56] duration metric: took 1.366028879s for fixHost
	I1222 00:28:39.036736 1446402 start.go:83] releasing machines lock for "functional-973657", held for 1.366069585s
	I1222 00:28:39.036807 1446402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
	I1222 00:28:39.056063 1446402 ssh_runner.go:195] Run: cat /version.json
	I1222 00:28:39.056131 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.056209 1446402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 00:28:39.056284 1446402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
	I1222 00:28:39.084466 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.086214 1446402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
	I1222 00:28:39.182487 1446402 ssh_runner.go:195] Run: systemctl --version
	I1222 00:28:39.277379 1446402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 00:28:39.281860 1446402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 00:28:39.281935 1446402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 00:28:39.290006 1446402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 00:28:39.290021 1446402 start.go:496] detecting cgroup driver to use...
	I1222 00:28:39.290053 1446402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 00:28:39.290134 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 00:28:39.305829 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 00:28:39.319320 1446402 docker.go:218] disabling cri-docker service (if available) ...
	I1222 00:28:39.319374 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 00:28:39.335346 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 00:28:39.349145 1446402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 00:28:39.473478 1446402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 00:28:39.618008 1446402 docker.go:234] disabling docker service ...
	I1222 00:28:39.618090 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 00:28:39.634656 1446402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 00:28:39.647677 1446402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 00:28:39.771400 1446402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 00:28:39.894302 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 00:28:39.907014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
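Note: the `mkdir -p ... | sudo tee` pipeline above writes a one-line crictl config pointing at the containerd socket. The same pattern, retargeted at a scratch directory so it runs without sudo (the real target is `/etc/crictl.yaml`), looks like this:

```shell
# Same write pattern as the log, against a scratch dir instead of /etc.
etcdir=$(mktemp -d)
mkdir -p "$etcdir" && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | tee "$etcdir/crictl.yaml"
cat "$etcdir/crictl.yaml"
```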
	I1222 00:28:39.920771 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 00:28:39.929451 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 00:28:39.938829 1446402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 00:28:39.938905 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 00:28:39.947569 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.956482 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 00:28:39.965074 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 00:28:39.973881 1446402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 00:28:39.981977 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 00:28:39.990962 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 00:28:39.999843 1446402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
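Note: the run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place; the `SystemdCgroup = false` edit is what makes containerd match the detected "cgroupfs" driver. The pattern can be exercised against a minimal sample file (GNU sed assumed, as on the Debian-based node):

```shell
# The cgroup-driver rewrite from the log, applied to a sample config.toml.
cfg_toml=$(mktemp)
cat > "$cfg_toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg_toml"
grep SystemdCgroup "$cfg_toml"
```

The capture group `( *)` preserves the original indentation, which is why the log uses `\1` in every replacement.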
	I1222 00:28:40.013571 1446402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 00:28:40.024830 1446402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 00:28:40.034498 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.154100 1446402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 00:28:40.334682 1446402 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 00:28:40.334744 1446402 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 00:28:40.338667 1446402 start.go:564] Will wait 60s for crictl version
	I1222 00:28:40.338723 1446402 ssh_runner.go:195] Run: which crictl
	I1222 00:28:40.342335 1446402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 00:28:40.367245 1446402 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 00:28:40.367308 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.389012 1446402 ssh_runner.go:195] Run: containerd --version
	I1222 00:28:40.418027 1446402 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 00:28:40.420898 1446402 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 00:28:40.437638 1446402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 00:28:40.444854 1446402 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 00:28:40.447771 1446402 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 00:28:40.447915 1446402 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 00:28:40.447997 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.473338 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.473351 1446402 containerd.go:534] Images already preloaded, skipping extraction
	I1222 00:28:40.473409 1446402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 00:28:40.498366 1446402 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 00:28:40.498377 1446402 cache_images.go:86] Images are preloaded, skipping loading
	I1222 00:28:40.498383 1446402 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1222 00:28:40.498490 1446402 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 00:28:40.498554 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 00:28:40.524507 1446402 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 00:28:40.524524 1446402 cni.go:84] Creating CNI manager for ""
	I1222 00:28:40.524533 1446402 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:28:40.524546 1446402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 00:28:40.524568 1446402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 00:28:40.524688 1446402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-973657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 00:28:40.524764 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 00:28:40.533361 1446402 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 00:28:40.533424 1446402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 00:28:40.541244 1446402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 00:28:40.555755 1446402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 00:28:40.568267 1446402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
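Note: the `kubeadm.yaml.new` copied above is the four-document YAML stream dumped earlier in the log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check on such a file (the heredoc below is a trimmed stand-in, not the full config):

```shell
# Count the `kind:` declarations in a trimmed stand-in for kubeadm.yaml.new.
kadm=$(mktemp)
cat > "$kadm" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' "$kadm"
```

On a node with the kubeadm binary, `kubeadm config validate --config <file>` performs a real schema check; the grep above is only a shape check.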
	I1222 00:28:40.581122 1446402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 00:28:40.585058 1446402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 00:28:40.703120 1446402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 00:28:40.989767 1446402 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
	I1222 00:28:40.989777 1446402 certs.go:195] generating shared ca certs ...
	I1222 00:28:40.989791 1446402 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:28:40.989935 1446402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 00:28:40.989982 1446402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 00:28:40.989987 1446402 certs.go:257] generating profile certs ...
	I1222 00:28:40.990067 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
	I1222 00:28:40.990138 1446402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
	I1222 00:28:40.990175 1446402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
	I1222 00:28:40.990291 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 00:28:40.990321 1446402 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 00:28:40.990328 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 00:28:40.990354 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 00:28:40.990377 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 00:28:40.990400 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 00:28:40.990449 1446402 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 00:28:40.991096 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 00:28:41.014750 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 00:28:41.036655 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 00:28:41.057901 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 00:28:41.075308 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 00:28:41.092360 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 00:28:41.110513 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 00:28:41.128091 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 00:28:41.145457 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 00:28:41.163271 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 00:28:41.181040 1446402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 00:28:41.199219 1446402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 00:28:41.211792 1446402 ssh_runner.go:195] Run: openssl version
	I1222 00:28:41.217908 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.225276 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 00:28:41.232519 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236312 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.236370 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 00:28:41.277548 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 00:28:41.285110 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.292519 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 00:28:41.300133 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304025 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.304090 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 00:28:41.345481 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 00:28:41.353129 1446402 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.360704 1446402 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 00:28:41.368364 1446402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372067 1446402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.372146 1446402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 00:28:41.413233 1446402 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
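Note: the `openssl x509 -hash` / `test -L /etc/ssl/certs/<hash>.0` cycles above follow OpenSSL's c_rehash convention: a symlink named `<subject-hash>.0` pointing at the CA cert is how the trust store is indexed. A self-contained illustration (self-signed cert and paths are assumptions):

```shell
# Build a <subject-hash>.0 symlink the way the log's ln -fs calls do.
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$certs/demo.key" -out "$certs/demoCA.pem" -subj "/CN=demoCA"
h=$(openssl x509 -hash -noout -in "$certs/demoCA.pem")
ln -fs "$certs/demoCA.pem" "$certs/$h.0"
ls -l "$certs/$h.0"
```

The hash is 8 lowercase hex digits, which matches the `b5213941.0`-style names the log checks.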
	I1222 00:28:41.421216 1446402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 00:28:41.424941 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 00:28:41.465845 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 00:28:41.509256 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 00:28:41.550176 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 00:28:41.591240 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 00:28:41.636957 1446402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
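Note: the six `openssl x509 ... -checkend 86400` runs above ask, for each control-plane cert, "will this expire within the next 24 hours?" Exit status 0 means the cert stays valid past the window; non-zero triggers regeneration. A minimal demonstration with a throwaway 2-day cert:

```shell
# -checkend 86400 passes for a cert with more than a day of validity left.
cw=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -keyout "$cw/k.pem" -out "$cw/c.pem" -subj "/CN=demo"
openssl x509 -noout -in "$cw/c.pem" -checkend 86400 && echo "valid for at least 24h"
```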
	I1222 00:28:41.677583 1446402 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:28:41.677666 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 00:28:41.677732 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.707257 1446402 cri.go:96] found id: ""
	I1222 00:28:41.707323 1446402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 00:28:41.715403 1446402 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 00:28:41.715412 1446402 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 00:28:41.715487 1446402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 00:28:41.722811 1446402 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.723316 1446402 kubeconfig.go:125] found "functional-973657" server: "https://192.168.49.2:8441"
	I1222 00:28:41.724615 1446402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 00:28:41.732758 1446402 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 00:14:06.897851329 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 00:28:40.577260246 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1222 00:28:41.732777 1446402 kubeadm.go:1161] stopping kube-system containers ...
	I1222 00:28:41.732788 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1222 00:28:41.732853 1446402 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 00:28:41.777317 1446402 cri.go:96] found id: ""
	I1222 00:28:41.777381 1446402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 00:28:41.795672 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:28:41.803787 1446402 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 00:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 00:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 22 00:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 22 00:18 /etc/kubernetes/scheduler.conf
	
	I1222 00:28:41.803861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:28:41.811861 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:28:41.819685 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.819741 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:28:41.827761 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.835493 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.835553 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:28:41.843556 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:28:41.851531 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 00:28:41.851587 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:28:41.860145 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:28:41.868219 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:41.913117 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.003962 1446402 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090816856s)
	I1222 00:28:43.004040 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.212066 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.273727 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1222 00:28:43.319285 1446402 api_server.go:52] waiting for apiserver process to appear ...
	I1222 00:28:43.319357 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:43.819515 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:44.319574 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:44.820396 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:45.320627 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:45.819505 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:46.320284 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:46.820238 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:47.320289 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:47.819431 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:48.319438 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:48.820203 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:49.320163 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:49.820253 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:50.320340 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:50.820353 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:51.320143 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:51.819557 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:52.319533 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:52.819532 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:53.319872 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:53.819550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:54.320283 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:54.820042 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:55.319836 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:55.820287 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:56.320324 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:56.819506 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:57.320256 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:57.820518 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:58.320250 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:58.819713 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:59.319563 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:28:59.820373 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:00.320250 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:00.819558 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:01.320363 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:01.820455 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:02.320264 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:02.820241 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:03.320188 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:03.820211 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:04.319540 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:04.819438 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:05.320247 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:05.820436 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:06.320370 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:06.819539 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:07.319751 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:07.820258 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:08.319764 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:08.820469 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:09.319565 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:09.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:10.319521 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:10.819559 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:11.319690 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:11.819773 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:12.319579 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:12.820346 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:13.320217 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:13.820210 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:14.320172 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:14.819550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:15.319430 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:15.820196 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:16.319448 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:16.819507 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:17.320526 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:17.819522 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:18.319482 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:18.820476 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:19.319544 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:19.820495 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:20.319558 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:20.820340 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:21.320236 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:21.820518 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:22.319699 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:22.819573 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:23.319567 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:23.819533 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:24.319887 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:24.819624 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:25.320279 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:25.820331 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:26.320411 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:26.819541 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:27.320442 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:27.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:28.319550 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:28.820464 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:29.320504 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:29.819508 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:30.319443 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:30.819528 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:31.319503 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:31.819888 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:32.319676 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:32.819521 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:33.319477 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:33.819820 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:34.319851 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:34.819577 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:35.320381 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:35.820397 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:36.320202 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:36.820411 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:37.319449 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:37.819535 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:38.319499 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:38.820465 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:39.319496 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:39.819562 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:40.319552 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:40.819553 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:41.319757 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:41.820402 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:42.319587 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:42.820218 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:43.320359 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:43.320440 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:43.346534 1446402 cri.go:96] found id: ""
	I1222 00:29:43.346547 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.346555 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:43.346560 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:43.346649 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:43.373797 1446402 cri.go:96] found id: ""
	I1222 00:29:43.373813 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.373820 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:43.373825 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:43.373887 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:43.399270 1446402 cri.go:96] found id: ""
	I1222 00:29:43.399284 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.399291 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:43.399296 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:43.399363 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:43.423840 1446402 cri.go:96] found id: ""
	I1222 00:29:43.423855 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.423862 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:43.423868 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:43.423926 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:43.447537 1446402 cri.go:96] found id: ""
	I1222 00:29:43.447551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.447558 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:43.447564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:43.447626 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:43.474001 1446402 cri.go:96] found id: ""
	I1222 00:29:43.474016 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.474024 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:43.474029 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:43.474123 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:43.502707 1446402 cri.go:96] found id: ""
	I1222 00:29:43.502721 1446402 logs.go:282] 0 containers: []
	W1222 00:29:43.502728 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:43.502736 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:43.502746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:43.560014 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:43.560034 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:43.575973 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:43.575990 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:43.644984 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:43.636222   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.636633   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638265   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638673   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.640308   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:43.636222   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.636633   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638265   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.638673   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:43.640308   10809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:43.644996 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:43.645007 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:43.711821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:43.711841 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:46.243876 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:46.255639 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:46.255701 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:46.285594 1446402 cri.go:96] found id: ""
	I1222 00:29:46.285608 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.285615 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:46.285621 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:46.285685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:46.313654 1446402 cri.go:96] found id: ""
	I1222 00:29:46.313669 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.313676 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:46.313694 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:46.313755 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:46.339799 1446402 cri.go:96] found id: ""
	I1222 00:29:46.339815 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.339822 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:46.339828 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:46.339891 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:46.365156 1446402 cri.go:96] found id: ""
	I1222 00:29:46.365184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.365192 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:46.365198 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:46.365265 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:46.394145 1446402 cri.go:96] found id: ""
	I1222 00:29:46.394159 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.394167 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:46.394172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:46.394233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:46.418776 1446402 cri.go:96] found id: ""
	I1222 00:29:46.418790 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.418797 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:46.418803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:46.418864 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:46.442806 1446402 cri.go:96] found id: ""
	I1222 00:29:46.442820 1446402 logs.go:282] 0 containers: []
	W1222 00:29:46.442828 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:46.442841 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:46.442851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:46.499137 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:46.499157 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:46.515023 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:46.515038 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:46.583664 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:46.574797   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.575467   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577211   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577813   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.579510   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:46.574797   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.575467   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577211   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.577813   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:46.579510   10913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:46.583675 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:46.583687 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:46.647550 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:46.647569 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.182538 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:49.192713 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:49.192773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:49.216898 1446402 cri.go:96] found id: ""
	I1222 00:29:49.216912 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.216919 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:49.216924 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:49.216980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:49.249605 1446402 cri.go:96] found id: ""
	I1222 00:29:49.249618 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.249626 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:49.249631 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:49.249690 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:49.280524 1446402 cri.go:96] found id: ""
	I1222 00:29:49.280539 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.280546 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:49.280552 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:49.280611 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:49.311301 1446402 cri.go:96] found id: ""
	I1222 00:29:49.311315 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.311323 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:49.311327 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:49.311385 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:49.336538 1446402 cri.go:96] found id: ""
	I1222 00:29:49.336551 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.336559 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:49.336564 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:49.336624 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:49.364232 1446402 cri.go:96] found id: ""
	I1222 00:29:49.364247 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.364256 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:49.364262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:49.364326 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:49.388613 1446402 cri.go:96] found id: ""
	I1222 00:29:49.388638 1446402 logs.go:282] 0 containers: []
	W1222 00:29:49.388646 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:49.388654 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:49.388664 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:49.451680 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:49.443682   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.444280   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.445741   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.446220   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.447728   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:49.443682   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.444280   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.445741   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.446220   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:49.447728   11010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:49.451690 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:49.451701 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:49.514558 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:49.514577 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:49.543077 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:49.543095 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:49.600979 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:49.600997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:52.116977 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:52.127516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:52.127578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:52.154761 1446402 cri.go:96] found id: ""
	I1222 00:29:52.154783 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.154790 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:52.154796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:52.154857 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:52.180288 1446402 cri.go:96] found id: ""
	I1222 00:29:52.180303 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.180310 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:52.180316 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:52.180376 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:52.208439 1446402 cri.go:96] found id: ""
	I1222 00:29:52.208454 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.208461 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:52.208466 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:52.208527 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:52.233901 1446402 cri.go:96] found id: ""
	I1222 00:29:52.233914 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.233932 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:52.233938 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:52.234004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:52.269797 1446402 cri.go:96] found id: ""
	I1222 00:29:52.269821 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.269829 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:52.269835 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:52.269901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:52.297204 1446402 cri.go:96] found id: ""
	I1222 00:29:52.297219 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.297236 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:52.297242 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:52.297308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:52.326411 1446402 cri.go:96] found id: ""
	I1222 00:29:52.326425 1446402 logs.go:282] 0 containers: []
	W1222 00:29:52.326433 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:52.326440 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:52.326450 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:52.387688 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:52.379564   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.380283   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.381856   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.382478   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.383956   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:52.379564   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.380283   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.381856   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.382478   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:52.383956   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:52.387700 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:52.387716 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:52.453506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:52.453524 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:52.483252 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:52.483269 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:52.540786 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:52.540804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.056509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:55.067103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:55.067178 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:55.093620 1446402 cri.go:96] found id: ""
	I1222 00:29:55.093649 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.093656 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:55.093663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:55.093734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:55.128411 1446402 cri.go:96] found id: ""
	I1222 00:29:55.128424 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.128432 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:55.128436 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:55.128504 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:55.154633 1446402 cri.go:96] found id: ""
	I1222 00:29:55.154646 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.154654 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:55.154659 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:55.154730 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:55.181169 1446402 cri.go:96] found id: ""
	I1222 00:29:55.181184 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.181191 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:55.181197 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:55.181256 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:55.206353 1446402 cri.go:96] found id: ""
	I1222 00:29:55.206367 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.206374 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:55.206379 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:55.206439 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:55.234930 1446402 cri.go:96] found id: ""
	I1222 00:29:55.234963 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.234971 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:55.234977 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:55.235052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:55.269275 1446402 cri.go:96] found id: ""
	I1222 00:29:55.269290 1446402 logs.go:282] 0 containers: []
	W1222 00:29:55.269298 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:55.269306 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:55.269316 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:55.332423 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:55.332442 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:55.348393 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:55.348409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:55.411746 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:55.403223   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.403831   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.405510   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.406036   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.407668   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:55.403223   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.403831   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.405510   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.406036   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:55.407668   11224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:29:55.411756 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:55.411767 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:55.478898 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:55.478918 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.007945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:29:58.028590 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:29:58.028654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:29:58.053263 1446402 cri.go:96] found id: ""
	I1222 00:29:58.053277 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.053284 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:29:58.053290 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:29:58.053349 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:29:58.078650 1446402 cri.go:96] found id: ""
	I1222 00:29:58.078664 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.078671 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:29:58.078676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:29:58.078746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:29:58.104284 1446402 cri.go:96] found id: ""
	I1222 00:29:58.104298 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.104305 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:29:58.104310 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:29:58.104372 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:29:58.133078 1446402 cri.go:96] found id: ""
	I1222 00:29:58.133103 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.133110 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:29:58.133116 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:29:58.133194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:29:58.160079 1446402 cri.go:96] found id: ""
	I1222 00:29:58.160092 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.160100 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:29:58.160105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:29:58.160209 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:29:58.184050 1446402 cri.go:96] found id: ""
	I1222 00:29:58.184070 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.184091 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:29:58.184098 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:29:58.184161 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:29:58.207826 1446402 cri.go:96] found id: ""
	I1222 00:29:58.207840 1446402 logs.go:282] 0 containers: []
	W1222 00:29:58.207847 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:29:58.207854 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:29:58.207864 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:29:58.275859 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:29:58.275886 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:29:58.308307 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:29:58.308324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:29:58.365952 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:29:58.365971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:29:58.381771 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:29:58.381788 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:29:58.449730 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:29:58.441326   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.442009   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.443754   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.444437   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:29:58.446028   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:00.951841 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:00.968627 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:00.968704 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:01.017629 1446402 cri.go:96] found id: ""
	I1222 00:30:01.017648 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.017657 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:01.017665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:01.017745 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:01.052801 1446402 cri.go:96] found id: ""
	I1222 00:30:01.052819 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.052829 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:01.052837 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:01.052908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:01.090908 1446402 cri.go:96] found id: ""
	I1222 00:30:01.090924 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.090942 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:01.090949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:01.091024 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:01.135566 1446402 cri.go:96] found id: ""
	I1222 00:30:01.135584 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.135592 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:01.135599 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:01.135681 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:01.183704 1446402 cri.go:96] found id: ""
	I1222 00:30:01.183720 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.183728 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:01.183734 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:01.183803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:01.237284 1446402 cri.go:96] found id: ""
	I1222 00:30:01.237300 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.237315 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:01.237321 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:01.237397 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:01.274702 1446402 cri.go:96] found id: ""
	I1222 00:30:01.274719 1446402 logs.go:282] 0 containers: []
	W1222 00:30:01.274727 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:01.274735 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:01.274746 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:01.337817 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:01.337838 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:01.357916 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:01.357936 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:01.439644 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:01.426011   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.426767   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.428959   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433129   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:01.433823   11434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:01.439657 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:01.439672 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:01.506150 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:01.506173 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.047348 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:04.057922 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:04.057990 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:04.083599 1446402 cri.go:96] found id: ""
	I1222 00:30:04.083613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.083620 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:04.083625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:04.083697 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:04.109159 1446402 cri.go:96] found id: ""
	I1222 00:30:04.109174 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.109181 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:04.109186 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:04.109245 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:04.138314 1446402 cri.go:96] found id: ""
	I1222 00:30:04.138329 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.138336 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:04.138344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:04.138405 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:04.164036 1446402 cri.go:96] found id: ""
	I1222 00:30:04.164051 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.164058 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:04.164078 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:04.164143 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:04.189566 1446402 cri.go:96] found id: ""
	I1222 00:30:04.189581 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.189588 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:04.189593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:04.189657 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:04.214647 1446402 cri.go:96] found id: ""
	I1222 00:30:04.214662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.214669 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:04.214675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:04.214746 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:04.243657 1446402 cri.go:96] found id: ""
	I1222 00:30:04.243672 1446402 logs.go:282] 0 containers: []
	W1222 00:30:04.243680 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:04.243687 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:04.243700 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:04.312395 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:04.312414 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:04.342163 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:04.342181 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:04.399936 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:04.399958 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:04.416847 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:04.416863 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:04.482794 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:04.473925   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.474511   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476052   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.476651   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:04.478320   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:06.983066 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:06.993652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:06.993715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:07.023165 1446402 cri.go:96] found id: ""
	I1222 00:30:07.023180 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.023187 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:07.023192 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:07.023255 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:07.049538 1446402 cri.go:96] found id: ""
	I1222 00:30:07.049552 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.049560 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:07.049565 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:07.049629 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:07.075257 1446402 cri.go:96] found id: ""
	I1222 00:30:07.075277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.075284 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:07.075289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:07.075351 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:07.101441 1446402 cri.go:96] found id: ""
	I1222 00:30:07.101456 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.101463 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:07.101469 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:07.101532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:07.128366 1446402 cri.go:96] found id: ""
	I1222 00:30:07.128380 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.128392 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:07.128398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:07.128460 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:07.152988 1446402 cri.go:96] found id: ""
	I1222 00:30:07.153005 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.153013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:07.153019 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:07.153079 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:07.178387 1446402 cri.go:96] found id: ""
	I1222 00:30:07.178401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:07.178409 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:07.178428 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:07.178440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:07.194549 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:07.194566 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:07.271952 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:07.261354   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.262317   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.264582   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.265158   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:07.267664   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:07.271961 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:07.271973 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:07.346114 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:07.346134 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:07.373577 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:07.373593 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:09.930306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:09.940949 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:09.941017 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:09.968763 1446402 cri.go:96] found id: ""
	I1222 00:30:09.968777 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.968784 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:09.968789 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:09.968848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:09.992991 1446402 cri.go:96] found id: ""
	I1222 00:30:09.993006 1446402 logs.go:282] 0 containers: []
	W1222 00:30:09.993013 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:09.993018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:09.993082 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:10.029788 1446402 cri.go:96] found id: ""
	I1222 00:30:10.029804 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.029811 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:10.029817 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:10.029886 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:10.067395 1446402 cri.go:96] found id: ""
	I1222 00:30:10.067410 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.067416 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:10.067422 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:10.067499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:10.095007 1446402 cri.go:96] found id: ""
	I1222 00:30:10.095022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.095030 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:10.095036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:10.095101 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:10.123474 1446402 cri.go:96] found id: ""
	I1222 00:30:10.123495 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.123503 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:10.123509 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:10.123573 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:10.153420 1446402 cri.go:96] found id: ""
	I1222 00:30:10.153435 1446402 logs.go:282] 0 containers: []
	W1222 00:30:10.153441 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:10.153448 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:10.153459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:10.210172 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:10.210193 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:10.226706 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:10.226725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:10.315292 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:10.305028   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.306157   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.307886   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.308616   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:10.310308   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:10.315303 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:10.315313 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:10.383703 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:10.383725 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:12.913638 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:12.925302 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:12.925369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:12.950905 1446402 cri.go:96] found id: ""
	I1222 00:30:12.950919 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.950930 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:12.950935 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:12.950996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:12.975557 1446402 cri.go:96] found id: ""
	I1222 00:30:12.975587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:12.975596 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:12.975609 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:12.975679 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:13.000143 1446402 cri.go:96] found id: ""
	I1222 00:30:13.000157 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.000165 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:13.000171 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:13.000234 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:13.026672 1446402 cri.go:96] found id: ""
	I1222 00:30:13.026694 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.026702 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:13.026709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:13.026773 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:13.055830 1446402 cri.go:96] found id: ""
	I1222 00:30:13.055846 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.055854 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:13.055859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:13.055923 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:13.082359 1446402 cri.go:96] found id: ""
	I1222 00:30:13.082374 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.082382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:13.082387 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:13.082449 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:13.108828 1446402 cri.go:96] found id: ""
	I1222 00:30:13.108842 1446402 logs.go:282] 0 containers: []
	W1222 00:30:13.108850 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:13.108858 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:13.108869 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:13.165350 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:13.165373 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:13.181480 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:13.181497 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:13.246107 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:13.236679   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.237365   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239274   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.239720   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:13.241214   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:13.246118 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:13.246128 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:13.320470 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:13.320490 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:15.851791 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:15.862330 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:15.862391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:15.890336 1446402 cri.go:96] found id: ""
	I1222 00:30:15.890350 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.890358 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:15.890364 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:15.890428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:15.917647 1446402 cri.go:96] found id: ""
	I1222 00:30:15.917662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.917670 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:15.917675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:15.917737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:15.948052 1446402 cri.go:96] found id: ""
	I1222 00:30:15.948074 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.948083 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:15.948089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:15.948155 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:15.973080 1446402 cri.go:96] found id: ""
	I1222 00:30:15.973094 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.973101 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:15.973107 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:15.973167 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:15.998935 1446402 cri.go:96] found id: ""
	I1222 00:30:15.998950 1446402 logs.go:282] 0 containers: []
	W1222 00:30:15.998957 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:15.998962 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:15.999025 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:16.027611 1446402 cri.go:96] found id: ""
	I1222 00:30:16.027628 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.027638 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:16.027644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:16.027727 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:16.053780 1446402 cri.go:96] found id: ""
	I1222 00:30:16.053794 1446402 logs.go:282] 0 containers: []
	W1222 00:30:16.053802 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:16.053809 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:16.053823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:16.124007 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:16.115168   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.115642   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.117413   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.118187   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:16.119870   11946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:16.124030 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:16.124042 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:16.186716 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:16.186736 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:16.216494 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:16.216511 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:16.279107 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:16.279127 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:18.798677 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:18.809493 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:18.809564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:18.835308 1446402 cri.go:96] found id: ""
	I1222 00:30:18.835323 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.835337 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:18.835344 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:18.835408 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:18.861968 1446402 cri.go:96] found id: ""
	I1222 00:30:18.861982 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.861989 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:18.861995 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:18.862052 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:18.887230 1446402 cri.go:96] found id: ""
	I1222 00:30:18.887243 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.887250 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:18.887256 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:18.887313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:18.912928 1446402 cri.go:96] found id: ""
	I1222 00:30:18.912942 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.912949 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:18.912954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:18.913016 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:18.939487 1446402 cri.go:96] found id: ""
	I1222 00:30:18.939501 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.939509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:18.939514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:18.939578 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:18.973342 1446402 cri.go:96] found id: ""
	I1222 00:30:18.973356 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.973364 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:18.973369 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:18.973428 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:18.997889 1446402 cri.go:96] found id: ""
	I1222 00:30:18.997913 1446402 logs.go:282] 0 containers: []
	W1222 00:30:18.997920 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:18.997927 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:18.997938 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:19.055572 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:19.055591 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:19.072427 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:19.072443 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:19.139616 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:19.130540   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.131330   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133029   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.133650   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:19.135400   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:19.139628 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:19.139638 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:19.202678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:19.202697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:21.731757 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:21.742262 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:21.742322 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:21.768714 1446402 cri.go:96] found id: ""
	I1222 00:30:21.768728 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.768736 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:21.768741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:21.768804 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:21.799253 1446402 cri.go:96] found id: ""
	I1222 00:30:21.799269 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.799276 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:21.799283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:21.799344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:21.824941 1446402 cri.go:96] found id: ""
	I1222 00:30:21.824963 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.824970 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:21.824975 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:21.825035 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:21.850741 1446402 cri.go:96] found id: ""
	I1222 00:30:21.850755 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.850762 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:21.850767 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:21.850829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:21.876572 1446402 cri.go:96] found id: ""
	I1222 00:30:21.876587 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.876595 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:21.876600 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:21.876660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:21.902799 1446402 cri.go:96] found id: ""
	I1222 00:30:21.902814 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.902821 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:21.902827 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:21.902888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:21.928559 1446402 cri.go:96] found id: ""
	I1222 00:30:21.928573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:21.928580 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:21.928587 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:21.928597 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:21.984144 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:21.984164 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:22.000384 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:22.000402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:22.073778 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:22.064391   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.065145   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.066873   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.067425   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:22.069099   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:22.073791 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:22.073804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:22.146346 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:22.146377 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.676106 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:24.687741 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:24.687862 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:24.714182 1446402 cri.go:96] found id: ""
	I1222 00:30:24.714204 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.714212 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:24.714217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:24.714281 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:24.740930 1446402 cri.go:96] found id: ""
	I1222 00:30:24.740944 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.740951 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:24.740957 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:24.741018 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:24.767599 1446402 cri.go:96] found id: ""
	I1222 00:30:24.767613 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.767621 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:24.767626 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:24.767685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:24.792739 1446402 cri.go:96] found id: ""
	I1222 00:30:24.792753 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.792760 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:24.792766 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:24.792827 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:24.816926 1446402 cri.go:96] found id: ""
	I1222 00:30:24.816940 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.816948 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:24.816953 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:24.817012 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:24.842765 1446402 cri.go:96] found id: ""
	I1222 00:30:24.842780 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.842788 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:24.842794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:24.842872 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:24.869078 1446402 cri.go:96] found id: ""
	I1222 00:30:24.869092 1446402 logs.go:282] 0 containers: []
	W1222 00:30:24.869099 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:24.869108 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:24.869119 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:24.903296 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:24.903312 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:24.961056 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:24.961075 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:24.976812 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:24.976828 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:25.069840 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:25.045522   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.046327   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.048398   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.049159   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:25.062238   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:25.069853 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:25.069866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.636563 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:27.647100 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:27.647166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:27.672723 1446402 cri.go:96] found id: ""
	I1222 00:30:27.672737 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.672745 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:27.672750 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:27.672813 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:27.702441 1446402 cri.go:96] found id: ""
	I1222 00:30:27.702455 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.702462 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:27.702468 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:27.702530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:27.731422 1446402 cri.go:96] found id: ""
	I1222 00:30:27.731436 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.731443 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:27.731448 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:27.731509 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:27.756265 1446402 cri.go:96] found id: ""
	I1222 00:30:27.756279 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.756287 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:27.756292 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:27.756354 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:27.779774 1446402 cri.go:96] found id: ""
	I1222 00:30:27.779791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.779798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:27.779804 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:27.779867 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:27.805305 1446402 cri.go:96] found id: ""
	I1222 00:30:27.805320 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.805327 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:27.805333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:27.805396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:27.835772 1446402 cri.go:96] found id: ""
	I1222 00:30:27.835786 1446402 logs.go:282] 0 containers: []
	W1222 00:30:27.835794 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:27.835802 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:27.835813 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:27.851527 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:27.851543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:27.917867 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:27.909398   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.909941   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911438   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.911805   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:27.913341   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:27.917877 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:27.917889 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:27.981255 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:27.981274 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:28.012714 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:28.012732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:30.570668 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:30.581032 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:30.581096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:30.605788 1446402 cri.go:96] found id: ""
	I1222 00:30:30.605801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.605809 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:30.605816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:30.605878 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:30.630263 1446402 cri.go:96] found id: ""
	I1222 00:30:30.630277 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.630284 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:30.630289 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:30.630348 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:30.655578 1446402 cri.go:96] found id: ""
	I1222 00:30:30.655593 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.655600 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:30.655608 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:30.655668 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:30.680304 1446402 cri.go:96] found id: ""
	I1222 00:30:30.680319 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.680326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:30.680332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:30.680390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:30.706799 1446402 cri.go:96] found id: ""
	I1222 00:30:30.706812 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.706819 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:30.706826 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:30.706888 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:30.732009 1446402 cri.go:96] found id: ""
	I1222 00:30:30.732023 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.732030 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:30.732036 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:30.732145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:30.758260 1446402 cri.go:96] found id: ""
	I1222 00:30:30.758274 1446402 logs.go:282] 0 containers: []
	W1222 00:30:30.758282 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:30.758289 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:30.758302 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:30.773937 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:30.773955 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:30.836710 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:30.828393   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.829211   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.830850   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.831199   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:30.832774   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:30.836720 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:30.836734 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:30.898609 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:30.898629 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:30.926987 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:30.927002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.488514 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:33.500859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:33.500936 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:33.534647 1446402 cri.go:96] found id: ""
	I1222 00:30:33.534662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.534669 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:33.534675 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:33.534740 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:33.567528 1446402 cri.go:96] found id: ""
	I1222 00:30:33.567542 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.567550 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:33.567556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:33.567619 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:33.592756 1446402 cri.go:96] found id: ""
	I1222 00:30:33.592770 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.592777 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:33.592783 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:33.592843 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:33.618141 1446402 cri.go:96] found id: ""
	I1222 00:30:33.618155 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.618162 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:33.618169 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:33.618229 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:33.643676 1446402 cri.go:96] found id: ""
	I1222 00:30:33.643690 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.643697 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:33.643702 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:33.643766 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:33.675007 1446402 cri.go:96] found id: ""
	I1222 00:30:33.675022 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.675029 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:33.675035 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:33.675096 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:33.701088 1446402 cri.go:96] found id: ""
	I1222 00:30:33.701104 1446402 logs.go:282] 0 containers: []
	W1222 00:30:33.701112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:33.701119 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:33.701130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:33.757879 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:33.757898 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:33.773857 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:33.773873 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:33.838724 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:33.830022   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.830661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.832486   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.833021   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:33.834672   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:33.838735 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:33.838745 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:33.901316 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:33.901336 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:36.433582 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:36.443819 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:36.443881 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:36.467506 1446402 cri.go:96] found id: ""
	I1222 00:30:36.467521 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.467528 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:36.467534 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:36.467596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:36.502511 1446402 cri.go:96] found id: ""
	I1222 00:30:36.502525 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.502532 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:36.502538 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:36.502596 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:36.528768 1446402 cri.go:96] found id: ""
	I1222 00:30:36.528782 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.528789 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:36.528795 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:36.528856 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:36.563520 1446402 cri.go:96] found id: ""
	I1222 00:30:36.563534 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.563552 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:36.563558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:36.563625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:36.587776 1446402 cri.go:96] found id: ""
	I1222 00:30:36.587791 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.587798 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:36.587803 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:36.587870 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:36.613760 1446402 cri.go:96] found id: ""
	I1222 00:30:36.613774 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.613781 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:36.613786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:36.613846 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:36.638515 1446402 cri.go:96] found id: ""
	I1222 00:30:36.638529 1446402 logs.go:282] 0 containers: []
	W1222 00:30:36.638536 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:36.638544 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:36.638554 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:36.697219 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:36.697239 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:36.713436 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:36.713452 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:36.780368 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:36.772229   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.773020   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774640   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.774973   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:36.776479   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:36.780381 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:36.780393 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:36.842888 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:36.842908 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.372135 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:39.382719 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:39.382781 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:39.408981 1446402 cri.go:96] found id: ""
	I1222 00:30:39.408994 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.409002 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:39.409007 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:39.409066 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:39.442559 1446402 cri.go:96] found id: ""
	I1222 00:30:39.442573 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.442581 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:39.442586 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:39.442643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:39.468577 1446402 cri.go:96] found id: ""
	I1222 00:30:39.468591 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.468598 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:39.468603 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:39.468660 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:39.510316 1446402 cri.go:96] found id: ""
	I1222 00:30:39.510331 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.510339 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:39.510345 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:39.510407 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:39.540511 1446402 cri.go:96] found id: ""
	I1222 00:30:39.540526 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.540538 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:39.540544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:39.540607 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:39.567225 1446402 cri.go:96] found id: ""
	I1222 00:30:39.567239 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.567246 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:39.567251 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:39.567313 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:39.592091 1446402 cri.go:96] found id: ""
	I1222 00:30:39.592105 1446402 logs.go:282] 0 containers: []
	W1222 00:30:39.592112 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:39.592119 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:39.592130 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:39.622343 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:39.622362 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:39.679425 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:39.679444 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:39.696213 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:39.696230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:39.769659 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:39.760868   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.761671   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763320   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.763916   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:39.765513   12812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:39.769670 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:39.769680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.336173 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:42.346558 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:42.346621 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:42.370787 1446402 cri.go:96] found id: ""
	I1222 00:30:42.370802 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.370810 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:42.370816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:42.370877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:42.395960 1446402 cri.go:96] found id: ""
	I1222 00:30:42.395973 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.395980 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:42.395985 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:42.396044 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:42.421477 1446402 cri.go:96] found id: ""
	I1222 00:30:42.421491 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.421498 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:42.421504 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:42.421564 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:42.446555 1446402 cri.go:96] found id: ""
	I1222 00:30:42.446569 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.446577 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:42.446582 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:42.446642 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:42.472081 1446402 cri.go:96] found id: ""
	I1222 00:30:42.472098 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.472105 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:42.472110 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:42.472169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:42.511362 1446402 cri.go:96] found id: ""
	I1222 00:30:42.511375 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.511382 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:42.511388 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:42.511447 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:42.547512 1446402 cri.go:96] found id: ""
	I1222 00:30:42.547527 1446402 logs.go:282] 0 containers: []
	W1222 00:30:42.547533 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:42.547541 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:42.547551 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:42.615776 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:42.615799 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:42.646130 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:42.646146 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:42.705658 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:42.705677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:42.721590 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:42.721610 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:42.787813 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.779074   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.780775   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.781111   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:42.782752   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.288531 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:45.303331 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:45.303401 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:45.338450 1446402 cri.go:96] found id: ""
	I1222 00:30:45.338484 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.338492 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:45.338499 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:45.338571 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:45.365473 1446402 cri.go:96] found id: ""
	I1222 00:30:45.365487 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.365494 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:45.365500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:45.365561 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:45.390271 1446402 cri.go:96] found id: ""
	I1222 00:30:45.390285 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.390292 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:45.390298 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:45.390357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:45.414377 1446402 cri.go:96] found id: ""
	I1222 00:30:45.414391 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.414398 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:45.414405 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:45.414465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:45.443708 1446402 cri.go:96] found id: ""
	I1222 00:30:45.443722 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.443729 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:45.443735 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:45.443800 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:45.469111 1446402 cri.go:96] found id: ""
	I1222 00:30:45.469126 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.469133 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:45.469138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:45.469199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:45.506648 1446402 cri.go:96] found id: ""
	I1222 00:30:45.506662 1446402 logs.go:282] 0 containers: []
	W1222 00:30:45.506670 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:45.506678 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:45.506688 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:45.570224 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:45.570244 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:45.587665 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:45.587682 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:45.658642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:45.649729   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.650417   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652145   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.652819   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:45.654615   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:45.658668 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:45.658680 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:45.726278 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:45.726296 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
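The cycle above repeats every ~3 seconds: pgrep for kube-apiserver, crictl listings for each control-plane component, then log gathering. When triaging a run like this, the two signals worth extracting are the kubectl connection-refused count and which components crictl failed to find. A minimal, hypothetical grep/sed sketch (not part of minikube; the embedded sample is abridged from the lines above):

```shell
#!/usr/bin/env bash
# Hypothetical triage helper: given minikube log text, count kubectl
# "connection refused" errors and list control-plane components for which
# no container was found. Sample abridged from the log above.
log='W1222 00:30:42.370810 1446402 logs.go:284] No container was found matching "kube-apiserver"
E1222 00:30:42.778324   12913 memcache.go:265] "Unhandled Error" err="... dial tcp [::1]:8441: connect: connection refused"
W1222 00:30:42.395980 1446402 logs.go:284] No container was found matching "etcd"'

# Count lines reporting a refused apiserver connection.
refused=$(printf '%s\n' "$log" | grep -c 'connection refused')

# Extract the quoted component name from each "No container was found" line.
missing=$(printf '%s\n' "$log" | sed -n 's/.*No container was found matching "\(.*\)".*/\1/p')

echo "connection refused errors: $refused"
echo "missing components:"
echo "$missing"
```

On a full log such as this one, the same pattern would show every control-plane component missing, which is consistent with the kube-apiserver never coming up on port 8441.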
	I1222 00:30:48.258377 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:48.269041 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:48.269106 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:48.296090 1446402 cri.go:96] found id: ""
	I1222 00:30:48.296110 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.296118 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:48.296124 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:48.296189 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:48.324810 1446402 cri.go:96] found id: ""
	I1222 00:30:48.324824 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.324838 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:48.324844 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:48.324907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:48.355386 1446402 cri.go:96] found id: ""
	I1222 00:30:48.355401 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.355408 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:48.355413 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:48.355478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:48.382715 1446402 cri.go:96] found id: ""
	I1222 00:30:48.382738 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.382746 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:48.382752 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:48.382829 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:48.408554 1446402 cri.go:96] found id: ""
	I1222 00:30:48.408567 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.408574 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:48.408580 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:48.408643 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:48.434270 1446402 cri.go:96] found id: ""
	I1222 00:30:48.434293 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.434300 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:48.434306 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:48.434374 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:48.459881 1446402 cri.go:96] found id: ""
	I1222 00:30:48.459895 1446402 logs.go:282] 0 containers: []
	W1222 00:30:48.459903 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:48.459911 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:48.459921 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:48.517466 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:48.517484 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:48.537053 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:48.537070 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:48.604854 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:48.595608   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.596514   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.598305   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.599000   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:48.600577   13115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:48.604864 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:48.604874 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:48.671361 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:48.671387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:51.200853 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:51.211776 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:51.211839 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:51.238170 1446402 cri.go:96] found id: ""
	I1222 00:30:51.238186 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.238194 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:51.238199 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:51.238268 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:51.269105 1446402 cri.go:96] found id: ""
	I1222 00:30:51.269134 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.269142 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:51.269148 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:51.269219 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:51.293434 1446402 cri.go:96] found id: ""
	I1222 00:30:51.293457 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.293464 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:51.293470 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:51.293541 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:51.319040 1446402 cri.go:96] found id: ""
	I1222 00:30:51.319055 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.319062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:51.319068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:51.319130 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:51.348957 1446402 cri.go:96] found id: ""
	I1222 00:30:51.348974 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.348982 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:51.348987 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:51.349051 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:51.374220 1446402 cri.go:96] found id: ""
	I1222 00:30:51.374234 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.374242 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:51.374248 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:51.374308 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:51.399159 1446402 cri.go:96] found id: ""
	I1222 00:30:51.399173 1446402 logs.go:282] 0 containers: []
	W1222 00:30:51.399180 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:51.399188 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:51.399198 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:51.459029 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:51.459048 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:51.475298 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:51.475315 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:51.566963 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:51.557344   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.558248   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.560695   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.561017   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:51.562619   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:51.566987 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:51.566997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:51.629274 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:51.629295 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:54.157280 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:54.168037 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:54.168148 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:54.193307 1446402 cri.go:96] found id: ""
	I1222 00:30:54.193321 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.193328 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:54.193333 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:54.193396 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:54.219101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.219115 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.219123 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:54.219128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:54.219194 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:54.246374 1446402 cri.go:96] found id: ""
	I1222 00:30:54.246389 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.246396 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:54.246407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:54.246465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:54.271786 1446402 cri.go:96] found id: ""
	I1222 00:30:54.271801 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.271808 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:54.271813 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:54.271879 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:54.297101 1446402 cri.go:96] found id: ""
	I1222 00:30:54.297116 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.297123 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:54.297128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:54.297187 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:54.321971 1446402 cri.go:96] found id: ""
	I1222 00:30:54.321984 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.321991 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:54.321997 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:54.322057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:54.347313 1446402 cri.go:96] found id: ""
	I1222 00:30:54.347327 1446402 logs.go:282] 0 containers: []
	W1222 00:30:54.347334 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:54.347342 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:54.347353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:54.403888 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:54.403909 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:54.419766 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:54.419782 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:54.484682 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:54.476870   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.477516   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479088   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.479419   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:54.480904   13317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:30:54.484693 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:54.484703 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:54.552360 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:54.552378 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.081711 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:30:57.092202 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:30:57.092266 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:30:57.117391 1446402 cri.go:96] found id: ""
	I1222 00:30:57.117405 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.117412 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:30:57.117419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:30:57.117479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:30:57.143247 1446402 cri.go:96] found id: ""
	I1222 00:30:57.143261 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.143269 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:30:57.143274 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:30:57.143336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:30:57.167819 1446402 cri.go:96] found id: ""
	I1222 00:30:57.167833 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.167840 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:30:57.167845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:30:57.167907 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:30:57.199021 1446402 cri.go:96] found id: ""
	I1222 00:30:57.199036 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.199043 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:30:57.199049 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:30:57.199108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:30:57.222971 1446402 cri.go:96] found id: ""
	I1222 00:30:57.222986 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.222993 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:30:57.222999 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:30:57.223058 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:30:57.248778 1446402 cri.go:96] found id: ""
	I1222 00:30:57.248792 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.248800 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:30:57.248806 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:30:57.248865 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:30:57.274281 1446402 cri.go:96] found id: ""
	I1222 00:30:57.274294 1446402 logs.go:282] 0 containers: []
	W1222 00:30:57.274301 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:30:57.274309 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:30:57.274319 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:30:57.336861 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:30:57.336882 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:30:57.365636 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:30:57.365661 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:30:57.423967 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:30:57.423989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:30:57.440127 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:30:57.440145 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:30:57.509798 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:30:57.500225   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.501062   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.503871   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.504255   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:30:57.505775   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.010205 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:00.104650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:00.104734 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:00.179982 1446402 cri.go:96] found id: ""
	I1222 00:31:00.180032 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.180041 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:00.180071 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:00.180239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:00.284701 1446402 cri.go:96] found id: ""
	I1222 00:31:00.284717 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.284725 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:00.284731 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:00.284803 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:00.386635 1446402 cri.go:96] found id: ""
	I1222 00:31:00.386652 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.386659 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:00.386665 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:00.386735 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:00.427920 1446402 cri.go:96] found id: ""
	I1222 00:31:00.427944 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.427959 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:00.427966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:00.428040 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:00.465116 1446402 cri.go:96] found id: ""
	I1222 00:31:00.465134 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.465144 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:00.465151 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:00.465232 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:00.499645 1446402 cri.go:96] found id: ""
	I1222 00:31:00.499660 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.499667 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:00.499673 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:00.499747 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:00.537565 1446402 cri.go:96] found id: ""
	I1222 00:31:00.537582 1446402 logs.go:282] 0 containers: []
	W1222 00:31:00.537595 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:00.537604 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:00.537615 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:00.575552 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:00.575567 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:00.633041 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:00.633063 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:00.649172 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:00.649187 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:00.724351 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:00.712002   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.712620   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.717220   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.718158   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:00.720021   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:00.724361 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:00.724372 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.287306 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:03.298001 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:03.298072 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:03.324825 1446402 cri.go:96] found id: ""
	I1222 00:31:03.324840 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.324847 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:03.324859 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:03.324922 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:03.350917 1446402 cri.go:96] found id: ""
	I1222 00:31:03.350931 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.350939 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:03.350944 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:03.351006 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:03.379670 1446402 cri.go:96] found id: ""
	I1222 00:31:03.379685 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.379692 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:03.379697 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:03.379757 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:03.404478 1446402 cri.go:96] found id: ""
	I1222 00:31:03.404492 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.404499 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:03.404505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:03.404566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:03.433469 1446402 cri.go:96] found id: ""
	I1222 00:31:03.433483 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.433491 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:03.433496 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:03.433559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:03.458710 1446402 cri.go:96] found id: ""
	I1222 00:31:03.458724 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.458731 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:03.458737 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:03.458798 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:03.489628 1446402 cri.go:96] found id: ""
	I1222 00:31:03.489641 1446402 logs.go:282] 0 containers: []
	W1222 00:31:03.489648 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:03.489656 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:03.489666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:03.561791 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:03.561811 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:03.591660 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:03.591676 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:03.649546 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:03.649564 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:03.665699 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:03.665717 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:03.732939 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:03.723537   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725016   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.725617   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.726715   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:03.727036   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.234625 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:06.245401 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:06.245464 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:06.272079 1446402 cri.go:96] found id: ""
	I1222 00:31:06.272093 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.272100 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:06.272105 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:06.272166 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:06.297857 1446402 cri.go:96] found id: ""
	I1222 00:31:06.297871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.297881 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:06.297886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:06.297947 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:06.323563 1446402 cri.go:96] found id: ""
	I1222 00:31:06.323578 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.323585 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:06.323591 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:06.323654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:06.352113 1446402 cri.go:96] found id: ""
	I1222 00:31:06.352128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.352135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:06.352140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:06.352201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:06.383883 1446402 cri.go:96] found id: ""
	I1222 00:31:06.383897 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.383906 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:06.383911 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:06.383980 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:06.410293 1446402 cri.go:96] found id: ""
	I1222 00:31:06.410307 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.410314 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:06.410319 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:06.410379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:06.436428 1446402 cri.go:96] found id: ""
	I1222 00:31:06.436442 1446402 logs.go:282] 0 containers: []
	W1222 00:31:06.436449 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:06.436457 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:06.436467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:06.493371 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:06.493391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:06.511382 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:06.511400 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:06.582246 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:06.573607   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.574274   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576164   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.576589   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:06.578115   13739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:06.582256 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:06.582266 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:06.644909 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:06.644931 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.176116 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:09.186886 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:09.186957 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:09.212045 1446402 cri.go:96] found id: ""
	I1222 00:31:09.212081 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.212088 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:09.212094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:09.212169 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:09.237345 1446402 cri.go:96] found id: ""
	I1222 00:31:09.237360 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.237367 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:09.237373 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:09.237435 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:09.262938 1446402 cri.go:96] found id: ""
	I1222 00:31:09.262953 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.262960 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:09.262966 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:09.263027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:09.288202 1446402 cri.go:96] found id: ""
	I1222 00:31:09.288216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.288223 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:09.288228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:09.288291 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:09.313061 1446402 cri.go:96] found id: ""
	I1222 00:31:09.313075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.313083 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:09.313088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:09.313151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:09.342668 1446402 cri.go:96] found id: ""
	I1222 00:31:09.342683 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.342691 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:09.342696 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:09.342760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:09.370215 1446402 cri.go:96] found id: ""
	I1222 00:31:09.370239 1446402 logs.go:282] 0 containers: []
	W1222 00:31:09.370249 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:09.370258 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:09.370270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:09.433823 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:09.425161   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.425608   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.427450   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.428117   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:09.429893   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:09.433834 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:09.433846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:09.496002 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:09.496024 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:09.538432 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:09.538457 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:09.599912 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:09.599933 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.117068 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:12.128268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:12.128331 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:12.154851 1446402 cri.go:96] found id: ""
	I1222 00:31:12.154865 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.154873 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:12.154878 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:12.154961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:12.180838 1446402 cri.go:96] found id: ""
	I1222 00:31:12.180852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.180860 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:12.180865 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:12.180927 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:12.205653 1446402 cri.go:96] found id: ""
	I1222 00:31:12.205667 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.205683 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:12.205689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:12.205760 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:12.232339 1446402 cri.go:96] found id: ""
	I1222 00:31:12.232352 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.232360 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:12.232365 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:12.232425 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:12.257997 1446402 cri.go:96] found id: ""
	I1222 00:31:12.258013 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.258020 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:12.258026 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:12.258113 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:12.282449 1446402 cri.go:96] found id: ""
	I1222 00:31:12.282464 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.282472 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:12.282478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:12.282548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:12.308351 1446402 cri.go:96] found id: ""
	I1222 00:31:12.308365 1446402 logs.go:282] 0 containers: []
	W1222 00:31:12.308372 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:12.308380 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:12.308391 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:12.365268 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:12.365286 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:12.381163 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:12.381180 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:12.448592 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:12.440189   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.440992   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.442650   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.443125   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:12.444728   13948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:12.448603 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:12.448614 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:12.512421 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:12.512440 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:15.042734 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:15.076968 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:15.077038 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:15.105454 1446402 cri.go:96] found id: ""
	I1222 00:31:15.105469 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.105477 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:15.105484 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:15.105548 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:15.133491 1446402 cri.go:96] found id: ""
	I1222 00:31:15.133517 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.133525 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:15.133531 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:15.133610 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:15.161141 1446402 cri.go:96] found id: ""
	I1222 00:31:15.161155 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.161162 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:15.161168 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:15.161243 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:15.189035 1446402 cri.go:96] found id: ""
	I1222 00:31:15.189062 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.189071 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:15.189077 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:15.189153 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:15.215453 1446402 cri.go:96] found id: ""
	I1222 00:31:15.215467 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.215474 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:15.215479 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:15.215542 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:15.241518 1446402 cri.go:96] found id: ""
	I1222 00:31:15.241542 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.241550 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:15.241556 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:15.241627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:15.270847 1446402 cri.go:96] found id: ""
	I1222 00:31:15.270862 1446402 logs.go:282] 0 containers: []
	W1222 00:31:15.270878 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:15.270886 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:15.270896 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:15.329892 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:15.329919 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:15.345769 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:15.345787 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:15.412686 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:15.403867   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.404399   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406057   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.406571   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:15.408071   14050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:15.412697 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:15.412708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:15.475513 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:15.475533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:18.013729 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:18.025498 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:18.025570 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:18.051451 1446402 cri.go:96] found id: ""
	I1222 00:31:18.051466 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.051473 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:18.051478 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:18.051540 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:18.078412 1446402 cri.go:96] found id: ""
	I1222 00:31:18.078428 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.078436 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:18.078442 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:18.078511 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:18.105039 1446402 cri.go:96] found id: ""
	I1222 00:31:18.105054 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.105062 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:18.105067 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:18.105129 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:18.132285 1446402 cri.go:96] found id: ""
	I1222 00:31:18.132300 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.132308 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:18.132314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:18.132379 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:18.160762 1446402 cri.go:96] found id: ""
	I1222 00:31:18.160781 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.160788 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:18.160794 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:18.160855 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:18.187281 1446402 cri.go:96] found id: ""
	I1222 00:31:18.187295 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.187303 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:18.187308 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:18.187369 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:18.214033 1446402 cri.go:96] found id: ""
	I1222 00:31:18.214048 1446402 logs.go:282] 0 containers: []
	W1222 00:31:18.214055 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:18.214062 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:18.214072 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:18.274937 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:18.274957 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:18.291496 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:18.291514 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:18.356830 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:18.348577   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.349325   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351002   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.351500   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:18.353025   14155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:18.356841 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:18.356851 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:18.420006 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:18.420026 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:20.955836 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:20.966430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:20.966499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:20.992202 1446402 cri.go:96] found id: ""
	I1222 00:31:20.992216 1446402 logs.go:282] 0 containers: []
	W1222 00:31:20.992223 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:20.992229 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:20.992292 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:21.020435 1446402 cri.go:96] found id: ""
	I1222 00:31:21.020449 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.020456 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:21.020462 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:21.020525 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:21.045920 1446402 cri.go:96] found id: ""
	I1222 00:31:21.045934 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.045940 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:21.045945 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:21.046007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:21.069898 1446402 cri.go:96] found id: ""
	I1222 00:31:21.069912 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.069920 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:21.069926 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:21.069986 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:21.096061 1446402 cri.go:96] found id: ""
	I1222 00:31:21.096075 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.096082 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:21.096088 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:21.096152 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:21.121380 1446402 cri.go:96] found id: ""
	I1222 00:31:21.121394 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.121401 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:21.121407 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:21.121473 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:21.147060 1446402 cri.go:96] found id: ""
	I1222 00:31:21.147083 1446402 logs.go:282] 0 containers: []
	W1222 00:31:21.147091 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:21.147098 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:21.147110 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:21.163066 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:21.163085 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:21.229457 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:21.220665   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.221438   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223287   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.223780   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:21.225557   14257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:21.229467 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:21.229482 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:21.296323 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:21.296342 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:21.329392 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:21.329409 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:23.886587 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:23.896889 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:23.896949 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:23.921855 1446402 cri.go:96] found id: ""
	I1222 00:31:23.921870 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.921878 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:23.921883 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:23.921943 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:23.947445 1446402 cri.go:96] found id: ""
	I1222 00:31:23.947459 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.947466 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:23.947471 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:23.947532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:23.973150 1446402 cri.go:96] found id: ""
	I1222 00:31:23.973164 1446402 logs.go:282] 0 containers: []
	W1222 00:31:23.973171 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:23.973176 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:23.973236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:24.000119 1446402 cri.go:96] found id: ""
	I1222 00:31:24.000133 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.000140 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:24.000145 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:24.000208 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:24.028319 1446402 cri.go:96] found id: ""
	I1222 00:31:24.028333 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.028341 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:24.028346 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:24.028416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:24.054514 1446402 cri.go:96] found id: ""
	I1222 00:31:24.054528 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.054536 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:24.054541 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:24.054623 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:24.079783 1446402 cri.go:96] found id: ""
	I1222 00:31:24.079796 1446402 logs.go:282] 0 containers: []
	W1222 00:31:24.079804 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:24.079812 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:24.079823 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:24.136543 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:24.136563 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:24.152385 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:24.152402 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:24.219394 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:24.210731   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.211515   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213293   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.213872   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:24.215420   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:24.219403 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:24.219413 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:24.282313 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:24.282331 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:26.811961 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:26.822374 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:26.822443 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:26.851730 1446402 cri.go:96] found id: ""
	I1222 00:31:26.851745 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.851753 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:26.851758 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:26.851820 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:26.876518 1446402 cri.go:96] found id: ""
	I1222 00:31:26.876533 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.876540 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:26.876545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:26.876614 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:26.906243 1446402 cri.go:96] found id: ""
	I1222 00:31:26.906258 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.906265 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:26.906271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:26.906332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:26.933029 1446402 cri.go:96] found id: ""
	I1222 00:31:26.933043 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.933050 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:26.933056 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:26.933124 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:26.962389 1446402 cri.go:96] found id: ""
	I1222 00:31:26.962404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.962411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:26.962417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:26.962478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:26.986566 1446402 cri.go:96] found id: ""
	I1222 00:31:26.986579 1446402 logs.go:282] 0 containers: []
	W1222 00:31:26.986587 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:26.986593 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:26.986654 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:27.013857 1446402 cri.go:96] found id: ""
	I1222 00:31:27.013872 1446402 logs.go:282] 0 containers: []
	W1222 00:31:27.013885 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:27.013896 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:27.013907 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:27.072155 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:27.072174 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:27.088000 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:27.088018 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:27.155219 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:27.146314   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.147060   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.148896   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.149539   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:27.151255   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:27.155229 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:27.155240 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:27.220122 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:27.220142 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:29.756602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:29.767503 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:29.767576 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:29.796758 1446402 cri.go:96] found id: ""
	I1222 00:31:29.796773 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.796781 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:29.796786 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:29.796848 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:29.826111 1446402 cri.go:96] found id: ""
	I1222 00:31:29.826125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.826133 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:29.826138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:29.826199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:29.851803 1446402 cri.go:96] found id: ""
	I1222 00:31:29.851817 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.851827 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:29.851833 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:29.851893 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:29.877952 1446402 cri.go:96] found id: ""
	I1222 00:31:29.877966 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.877973 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:29.877979 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:29.878041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:29.902393 1446402 cri.go:96] found id: ""
	I1222 00:31:29.902406 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.902414 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:29.902419 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:29.902499 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:29.930875 1446402 cri.go:96] found id: ""
	I1222 00:31:29.930889 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.930896 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:29.930901 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:29.930961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:29.954467 1446402 cri.go:96] found id: ""
	I1222 00:31:29.954481 1446402 logs.go:282] 0 containers: []
	W1222 00:31:29.954488 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:29.954496 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:29.954506 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:30.022300 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:30.022322 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:30.101450 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:30.101468 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:30.160615 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:30.160637 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:30.177543 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:30.177570 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:30.250821 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:30.242174   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.242862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.243862   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.244583   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:30.246440   14583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:32.751739 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:32.762856 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:32.762918 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:32.788176 1446402 cri.go:96] found id: ""
	I1222 00:31:32.788191 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.788197 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:32.788203 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:32.788264 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:32.815561 1446402 cri.go:96] found id: ""
	I1222 00:31:32.815575 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.815582 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:32.815587 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:32.815648 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:32.840208 1446402 cri.go:96] found id: ""
	I1222 00:31:32.840222 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.840229 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:32.840235 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:32.840298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:32.865041 1446402 cri.go:96] found id: ""
	I1222 00:31:32.865055 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.865062 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:32.865068 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:32.865127 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:32.891852 1446402 cri.go:96] found id: ""
	I1222 00:31:32.891871 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.891879 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:32.891884 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:32.891956 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:32.916991 1446402 cri.go:96] found id: ""
	I1222 00:31:32.917005 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.917013 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:32.917018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:32.917078 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:32.944551 1446402 cri.go:96] found id: ""
	I1222 00:31:32.944564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:32.944571 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:32.944579 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:32.944589 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:33.001246 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:33.001270 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:33.021275 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:33.021294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:33.093331 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:33.085315   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.086145   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087144   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.087857   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:33.089415   14674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:33.093342 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:33.093353 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:33.155921 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:33.155942 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:35.686392 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:35.696748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:35.696809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:35.721721 1446402 cri.go:96] found id: ""
	I1222 00:31:35.721736 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.721743 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:35.721748 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:35.721836 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:35.769211 1446402 cri.go:96] found id: ""
	I1222 00:31:35.769225 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.769232 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:35.769237 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:35.769296 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:35.801836 1446402 cri.go:96] found id: ""
	I1222 00:31:35.801850 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.801857 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:35.801863 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:35.801925 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:35.829689 1446402 cri.go:96] found id: ""
	I1222 00:31:35.829703 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.829711 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:35.829716 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:35.829775 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:35.855388 1446402 cri.go:96] found id: ""
	I1222 00:31:35.855403 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.855411 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:35.855417 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:35.855478 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:35.886055 1446402 cri.go:96] found id: ""
	I1222 00:31:35.886070 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.886105 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:35.886112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:35.886177 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:35.911567 1446402 cri.go:96] found id: ""
	I1222 00:31:35.911581 1446402 logs.go:282] 0 containers: []
	W1222 00:31:35.911589 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:35.911596 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:35.911608 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:35.978738 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:35.969873   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.970644   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972383   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.972972   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:35.974691   14773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:35.978748 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:35.978761 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:36.043835 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:36.043857 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:36.072278 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:36.072294 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:36.133943 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:36.133963 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.650565 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:38.660954 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:38.661027 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:38.685765 1446402 cri.go:96] found id: ""
	I1222 00:31:38.685780 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.685787 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:38.685793 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:38.685859 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:38.711272 1446402 cri.go:96] found id: ""
	I1222 00:31:38.711287 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.711295 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:38.711300 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:38.711366 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:38.739201 1446402 cri.go:96] found id: ""
	I1222 00:31:38.739217 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.739224 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:38.739230 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:38.739299 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:38.769400 1446402 cri.go:96] found id: ""
	I1222 00:31:38.769414 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.769421 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:38.769426 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:38.769486 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:38.805681 1446402 cri.go:96] found id: ""
	I1222 00:31:38.805695 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.805704 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:38.805709 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:38.805770 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:38.831145 1446402 cri.go:96] found id: ""
	I1222 00:31:38.831160 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.831167 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:38.831172 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:38.831233 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:38.861111 1446402 cri.go:96] found id: ""
	I1222 00:31:38.861125 1446402 logs.go:282] 0 containers: []
	W1222 00:31:38.861132 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:38.861140 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:38.861150 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:38.917581 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:38.917601 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:38.934979 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:38.934997 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:39.009642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:38.997287   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.997885   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:38.999441   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.000040   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:39.004739   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:39.009654 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:39.009666 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:39.079837 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:39.079866 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:41.610509 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:41.620849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:41.620915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:41.645625 1446402 cri.go:96] found id: ""
	I1222 00:31:41.645639 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.645647 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:41.645652 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:41.645715 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:41.671325 1446402 cri.go:96] found id: ""
	I1222 00:31:41.671339 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.671347 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:41.671353 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:41.671413 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:41.695685 1446402 cri.go:96] found id: ""
	I1222 00:31:41.695699 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.695706 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:41.695712 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:41.695772 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:41.721021 1446402 cri.go:96] found id: ""
	I1222 00:31:41.721034 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.721042 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:41.721047 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:41.721108 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:41.757975 1446402 cri.go:96] found id: ""
	I1222 00:31:41.757990 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.757997 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:41.758002 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:41.758064 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:41.802251 1446402 cri.go:96] found id: ""
	I1222 00:31:41.802266 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.802273 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:41.802279 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:41.802339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:41.835417 1446402 cri.go:96] found id: ""
	I1222 00:31:41.835433 1446402 logs.go:282] 0 containers: []
	W1222 00:31:41.835439 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:41.835447 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:41.835458 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:41.895808 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:41.895827 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:41.911760 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:41.911776 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:41.978878 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:41.969798   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.970501   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972132   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.972665   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:41.974279   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:41.978889 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:41.978900 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:42.043394 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:42.043415 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:44.576818 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:44.587175 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:44.587239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:44.613386 1446402 cri.go:96] found id: ""
	I1222 00:31:44.613404 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.613411 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:44.613416 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:44.613479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:44.642424 1446402 cri.go:96] found id: ""
	I1222 00:31:44.642444 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.642451 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:44.642456 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:44.642517 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:44.671623 1446402 cri.go:96] found id: ""
	I1222 00:31:44.671637 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.671645 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:44.671650 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:44.671720 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:44.697114 1446402 cri.go:96] found id: ""
	I1222 00:31:44.697128 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.697135 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:44.697140 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:44.697199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:44.724199 1446402 cri.go:96] found id: ""
	I1222 00:31:44.724213 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.724220 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:44.724226 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:44.724298 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:44.765403 1446402 cri.go:96] found id: ""
	I1222 00:31:44.765417 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.765436 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:44.765443 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:44.765510 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:44.795984 1446402 cri.go:96] found id: ""
	I1222 00:31:44.795999 1446402 logs.go:282] 0 containers: []
	W1222 00:31:44.796017 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:44.796026 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:44.796037 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:44.855400 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:44.855420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:44.872483 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:44.872501 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:44.941437 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:44.933277   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.933850   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935365   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.935699   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:44.937038   15095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:44.941449 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:44.941460 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:45.004528 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:45.004550 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.556363 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:47.566634 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:47.566695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:47.593291 1446402 cri.go:96] found id: ""
	I1222 00:31:47.593305 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.593312 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:47.593318 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:47.593387 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:47.617921 1446402 cri.go:96] found id: ""
	I1222 00:31:47.617935 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.617942 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:47.617947 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:47.618007 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:47.644745 1446402 cri.go:96] found id: ""
	I1222 00:31:47.644759 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.644766 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:47.644772 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:47.644831 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:47.669635 1446402 cri.go:96] found id: ""
	I1222 00:31:47.669649 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.669656 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:47.669661 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:47.669721 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:47.696237 1446402 cri.go:96] found id: ""
	I1222 00:31:47.696251 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.696258 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:47.696263 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:47.696321 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:47.720858 1446402 cri.go:96] found id: ""
	I1222 00:31:47.720877 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.720884 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:47.720890 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:47.720950 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:47.759042 1446402 cri.go:96] found id: ""
	I1222 00:31:47.759056 1446402 logs.go:282] 0 containers: []
	W1222 00:31:47.759064 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:47.759071 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:47.759088 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:47.775637 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:47.775652 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:47.848304 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:47.837609   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.838360   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.840008   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842174   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:47.842911   15197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:47.848314 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:47.848326 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:47.910821 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:47.910839 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:47.939115 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:47.939131 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.495637 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:50.506061 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:50.506147 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:50.531619 1446402 cri.go:96] found id: ""
	I1222 00:31:50.531634 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.531641 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:50.531647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:50.531707 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:50.556202 1446402 cri.go:96] found id: ""
	I1222 00:31:50.556215 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.556222 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:50.556228 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:50.556289 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:50.580637 1446402 cri.go:96] found id: ""
	I1222 00:31:50.580651 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.580658 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:50.580663 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:50.580726 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:50.605112 1446402 cri.go:96] found id: ""
	I1222 00:31:50.605126 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.605133 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:50.605138 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:50.605198 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:50.629268 1446402 cri.go:96] found id: ""
	I1222 00:31:50.629283 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.629290 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:50.629295 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:50.629356 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:50.655550 1446402 cri.go:96] found id: ""
	I1222 00:31:50.655564 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.655571 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:50.655576 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:50.655635 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:50.683838 1446402 cri.go:96] found id: ""
	I1222 00:31:50.683852 1446402 logs.go:282] 0 containers: []
	W1222 00:31:50.683859 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:50.683866 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:50.683877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:50.739538 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:50.739556 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:50.759933 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:50.759948 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:50.837166 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:50.827625   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.828436   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830329   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.830912   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:50.832535   15305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:50.837177 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:50.837188 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:50.902694 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:50.902713 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:53.430394 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:53.441567 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:53.441627 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:53.468013 1446402 cri.go:96] found id: ""
	I1222 00:31:53.468027 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.468034 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:53.468039 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:53.468109 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:53.494162 1446402 cri.go:96] found id: ""
	I1222 00:31:53.494176 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.494183 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:53.494188 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:53.494248 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:53.524039 1446402 cri.go:96] found id: ""
	I1222 00:31:53.524061 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.524068 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:53.524074 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:53.524137 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:53.548965 1446402 cri.go:96] found id: ""
	I1222 00:31:53.548979 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.548987 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:53.548992 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:53.549054 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:53.580216 1446402 cri.go:96] found id: ""
	I1222 00:31:53.580231 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.580238 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:53.580244 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:53.580304 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:53.605286 1446402 cri.go:96] found id: ""
	I1222 00:31:53.605301 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.605308 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:53.605314 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:53.605391 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:53.630900 1446402 cri.go:96] found id: ""
	I1222 00:31:53.630915 1446402 logs.go:282] 0 containers: []
	W1222 00:31:53.630922 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:53.630930 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:53.630940 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:53.686921 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:53.686939 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:53.704267 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:53.704290 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:53.789032 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:53.778364   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.780153   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.781189   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.783313   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.784665   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:53.778364   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.780153   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.781189   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.783313   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:53.784665   15410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:53.789043 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:53.789054 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:53.855439 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:53.855459 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:56.386602 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:56.396636 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:56.396695 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:56.419622 1446402 cri.go:96] found id: ""
	I1222 00:31:56.419635 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.419642 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:56.419647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:56.419711 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:56.443068 1446402 cri.go:96] found id: ""
	I1222 00:31:56.443082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.443088 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:56.443094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:56.443151 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:56.468547 1446402 cri.go:96] found id: ""
	I1222 00:31:56.468561 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.468568 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:56.468573 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:56.468639 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:56.496420 1446402 cri.go:96] found id: ""
	I1222 00:31:56.496434 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.496448 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:56.496453 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:56.496515 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:56.521822 1446402 cri.go:96] found id: ""
	I1222 00:31:56.521837 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.521844 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:56.521849 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:56.521910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:56.548113 1446402 cri.go:96] found id: ""
	I1222 00:31:56.548127 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.548135 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:56.548142 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:56.548205 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:56.577150 1446402 cri.go:96] found id: ""
	I1222 00:31:56.577166 1446402 logs.go:282] 0 containers: []
	W1222 00:31:56.577173 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:56.577181 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:56.577191 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:56.635797 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:56.635817 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:56.651214 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:56.651230 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:56.716938 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.708823   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710450   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710968   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.712514   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:56.708096   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.708823   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710450   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.710968   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:56.712514   15515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:31:56.716948 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:56.716959 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:56.780730 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:56.780749 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.308156 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:31:59.318415 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:31:59.318476 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:31:59.343305 1446402 cri.go:96] found id: ""
	I1222 00:31:59.343319 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.343326 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:31:59.343332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:31:59.343390 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:31:59.368501 1446402 cri.go:96] found id: ""
	I1222 00:31:59.368515 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.368523 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:31:59.368529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:31:59.368595 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:31:59.394364 1446402 cri.go:96] found id: ""
	I1222 00:31:59.394378 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.394385 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:31:59.394391 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:31:59.394452 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:31:59.420068 1446402 cri.go:96] found id: ""
	I1222 00:31:59.420082 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.420089 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:31:59.420094 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:31:59.420160 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:31:59.444153 1446402 cri.go:96] found id: ""
	I1222 00:31:59.444167 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.444174 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:31:59.444179 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:31:59.444239 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:31:59.473812 1446402 cri.go:96] found id: ""
	I1222 00:31:59.473827 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.473834 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:31:59.473840 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:31:59.473901 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:31:59.502392 1446402 cri.go:96] found id: ""
	I1222 00:31:59.502405 1446402 logs.go:282] 0 containers: []
	W1222 00:31:59.502412 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:31:59.502420 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:31:59.502429 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:31:59.564094 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:31:59.564114 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:31:59.596168 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:31:59.596186 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:31:59.652216 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:31:59.652236 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:31:59.668263 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:31:59.668278 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:31:59.729801 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:31:59.722174   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.722594   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724162   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724501   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.726011   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:31:59.722174   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.722594   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724162   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.724501   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:31:59.726011   15630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:02.230111 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:02.241018 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:02.241081 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:02.266489 1446402 cri.go:96] found id: ""
	I1222 00:32:02.266506 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.266514 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:02.266522 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:02.266583 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:02.291427 1446402 cri.go:96] found id: ""
	I1222 00:32:02.291451 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.291459 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:02.291465 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:02.291532 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:02.317575 1446402 cri.go:96] found id: ""
	I1222 00:32:02.317599 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.317607 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:02.317612 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:02.317683 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:02.346894 1446402 cri.go:96] found id: ""
	I1222 00:32:02.346918 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.346926 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:02.346932 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:02.347004 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:02.373650 1446402 cri.go:96] found id: ""
	I1222 00:32:02.373676 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.373683 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:02.373689 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:02.373758 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:02.398320 1446402 cri.go:96] found id: ""
	I1222 00:32:02.398334 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.398341 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:02.398347 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:02.398416 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:02.430114 1446402 cri.go:96] found id: ""
	I1222 00:32:02.430128 1446402 logs.go:282] 0 containers: []
	W1222 00:32:02.430136 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:02.430144 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:02.430154 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:02.485528 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:02.485549 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:02.501732 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:02.501748 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:02.566784 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:02.558532   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.559242   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.560819   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.561301   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.562756   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:02.558532   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.559242   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.560819   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.561301   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:02.562756   15720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:02.566793 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:02.566804 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:02.631159 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:02.631178 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:05.163426 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:05.173887 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:05.173961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:05.199160 1446402 cri.go:96] found id: ""
	I1222 00:32:05.199174 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.199181 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:05.199187 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:05.199257 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:05.223620 1446402 cri.go:96] found id: ""
	I1222 00:32:05.223634 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.223641 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:05.223647 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:05.223706 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:05.248870 1446402 cri.go:96] found id: ""
	I1222 00:32:05.248885 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.248893 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:05.248898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:05.248961 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:05.274824 1446402 cri.go:96] found id: ""
	I1222 00:32:05.274839 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.274846 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:05.274851 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:05.274910 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:05.300225 1446402 cri.go:96] found id: ""
	I1222 00:32:05.300239 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.300251 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:05.300257 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:05.300317 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:05.324470 1446402 cri.go:96] found id: ""
	I1222 00:32:05.324484 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.324492 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:05.324500 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:05.324563 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:05.352629 1446402 cri.go:96] found id: ""
	I1222 00:32:05.352647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:05.352655 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:05.352666 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:05.352677 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:05.415991 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:05.416014 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:05.431828 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:05.431845 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:05.498339 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:05.489677   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.490403   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492216   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492835   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.494413   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:05.489677   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.490403   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492216   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.492835   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:05.494413   15826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:05.498349 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:05.498364 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:05.563506 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:05.563525 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.094246 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:08.105089 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:08.105172 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:08.132175 1446402 cri.go:96] found id: ""
	I1222 00:32:08.132203 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.132211 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:08.132217 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:08.132280 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:08.158101 1446402 cri.go:96] found id: ""
	I1222 00:32:08.158115 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.158122 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:08.158128 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:08.158204 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:08.187238 1446402 cri.go:96] found id: ""
	I1222 00:32:08.187252 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.187259 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:08.187265 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:08.187325 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:08.211742 1446402 cri.go:96] found id: ""
	I1222 00:32:08.211756 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.211763 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:08.211768 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:08.211830 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:08.236099 1446402 cri.go:96] found id: ""
	I1222 00:32:08.236113 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.236120 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:08.236126 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:08.236199 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:08.261393 1446402 cri.go:96] found id: ""
	I1222 00:32:08.261407 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.261424 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:08.261430 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:08.261498 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:08.288417 1446402 cri.go:96] found id: ""
	I1222 00:32:08.288439 1446402 logs.go:282] 0 containers: []
	W1222 00:32:08.288447 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:08.288456 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:08.288467 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:08.304103 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:08.304124 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:08.368642 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:08.359974   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.360518   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362313   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.362653   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:08.364223   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:08.368652 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:08.368663 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:08.430523 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:08.430543 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:08.458205 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:08.458222 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.020855 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:11.033129 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:11.033201 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:11.063371 1446402 cri.go:96] found id: ""
	I1222 00:32:11.063385 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.063392 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:11.063398 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:11.063479 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:11.089853 1446402 cri.go:96] found id: ""
	I1222 00:32:11.089880 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.089891 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:11.089898 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:11.089971 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:11.120928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.120943 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.120971 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:11.120978 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:11.121045 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:11.151464 1446402 cri.go:96] found id: ""
	I1222 00:32:11.151502 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.151510 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:11.151516 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:11.151589 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:11.179209 1446402 cri.go:96] found id: ""
	I1222 00:32:11.179224 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.179233 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:11.179238 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:11.179324 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:11.205945 1446402 cri.go:96] found id: ""
	I1222 00:32:11.205979 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.205987 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:11.205993 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:11.206065 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:11.231928 1446402 cri.go:96] found id: ""
	I1222 00:32:11.231942 1446402 logs.go:282] 0 containers: []
	W1222 00:32:11.231949 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:11.231957 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:11.231967 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:11.296038 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:11.296064 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:11.312748 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:11.312764 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:11.378465 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:11.369616   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.370358   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372167   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.372861   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:11.374622   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:11.378480 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:11.378499 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:11.444244 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:11.444264 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:13.977331 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:13.989011 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:13.989094 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:14.028691 1446402 cri.go:96] found id: ""
	I1222 00:32:14.028726 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.028734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:14.028739 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:14.028810 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:14.055710 1446402 cri.go:96] found id: ""
	I1222 00:32:14.055725 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.055732 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:14.055738 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:14.055809 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:14.082530 1446402 cri.go:96] found id: ""
	I1222 00:32:14.082546 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.082553 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:14.082559 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:14.082625 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:14.107817 1446402 cri.go:96] found id: ""
	I1222 00:32:14.107840 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.107847 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:14.107853 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:14.107913 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:14.136680 1446402 cri.go:96] found id: ""
	I1222 00:32:14.136695 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.136701 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:14.136707 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:14.136767 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:14.161938 1446402 cri.go:96] found id: ""
	I1222 00:32:14.161961 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.161968 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:14.161974 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:14.162041 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:14.186794 1446402 cri.go:96] found id: ""
	I1222 00:32:14.186808 1446402 logs.go:282] 0 containers: []
	W1222 00:32:14.186814 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:14.186823 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:14.186832 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:14.242688 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:14.242708 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:14.259715 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:14.259732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:14.326979 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:14.317786   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319196   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.319865   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321398   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:14.321854   16135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:14.326990 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:14.327002 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:14.395678 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:14.395705 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:16.929785 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:16.940545 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:16.940609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:16.965350 1446402 cri.go:96] found id: ""
	I1222 00:32:16.965365 1446402 logs.go:282] 0 containers: []
	W1222 00:32:16.965372 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:16.965378 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:16.965441 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:17.001431 1446402 cri.go:96] found id: ""
	I1222 00:32:17.001447 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.001455 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:17.001461 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:17.001530 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:17.045444 1446402 cri.go:96] found id: ""
	I1222 00:32:17.045459 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.045466 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:17.045472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:17.045531 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:17.080407 1446402 cri.go:96] found id: ""
	I1222 00:32:17.080422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.080429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:17.080435 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:17.080500 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:17.107785 1446402 cri.go:96] found id: ""
	I1222 00:32:17.107799 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.107806 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:17.107812 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:17.107874 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:17.133084 1446402 cri.go:96] found id: ""
	I1222 00:32:17.133099 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.133106 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:17.133112 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:17.133170 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:17.162200 1446402 cri.go:96] found id: ""
	I1222 00:32:17.162215 1446402 logs.go:282] 0 containers: []
	W1222 00:32:17.162222 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:17.162232 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:17.162243 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:17.220080 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:17.220098 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:17.235955 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:17.235971 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:17.302399 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:17.293671   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.294273   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296022   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.296566   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:17.298287   16241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:17.302410 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:17.302420 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:17.365559 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:17.365578 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:19.896945 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:19.907830 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:19.907900 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:19.933463 1446402 cri.go:96] found id: ""
	I1222 00:32:19.933478 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.933485 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:19.933490 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:19.933556 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:19.958969 1446402 cri.go:96] found id: ""
	I1222 00:32:19.958983 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.958990 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:19.958996 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:19.959057 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:19.984725 1446402 cri.go:96] found id: ""
	I1222 00:32:19.984740 1446402 logs.go:282] 0 containers: []
	W1222 00:32:19.984748 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:19.984753 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:19.984819 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:20.030303 1446402 cri.go:96] found id: ""
	I1222 00:32:20.030318 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.030326 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:20.030332 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:20.030400 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:20.067239 1446402 cri.go:96] found id: ""
	I1222 00:32:20.067254 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.067262 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:20.067268 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:20.067336 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:20.094147 1446402 cri.go:96] found id: ""
	I1222 00:32:20.094161 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.094169 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:20.094174 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:20.094236 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:20.120347 1446402 cri.go:96] found id: ""
	I1222 00:32:20.120361 1446402 logs.go:282] 0 containers: []
	W1222 00:32:20.120369 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:20.120377 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:20.120387 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:20.192596 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:20.183539   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.184471   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186253   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.186801   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:20.188449   16339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:20.192608 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:20.192620 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:20.255011 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:20.255031 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:20.288327 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:20.288344 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:20.347178 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:20.347196 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:22.863692 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:22.873845 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:22.873915 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:22.898717 1446402 cri.go:96] found id: ""
	I1222 00:32:22.898737 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.898744 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:22.898749 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:22.898808 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:22.923719 1446402 cri.go:96] found id: ""
	I1222 00:32:22.923734 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.923741 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:22.923746 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:22.923806 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:22.953819 1446402 cri.go:96] found id: ""
	I1222 00:32:22.953834 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.953841 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:22.953847 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:22.953908 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:22.977769 1446402 cri.go:96] found id: ""
	I1222 00:32:22.977783 1446402 logs.go:282] 0 containers: []
	W1222 00:32:22.977791 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:22.977796 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:22.977858 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:23.011333 1446402 cri.go:96] found id: ""
	I1222 00:32:23.011348 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.011355 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:23.011361 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:23.011426 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:23.040887 1446402 cri.go:96] found id: ""
	I1222 00:32:23.040900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.040907 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:23.040913 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:23.040973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:23.070583 1446402 cri.go:96] found id: ""
	I1222 00:32:23.070597 1446402 logs.go:282] 0 containers: []
	W1222 00:32:23.070604 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:23.070612 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:23.070622 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:23.087115 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:23.087132 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:23.152903 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:23.144713   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.145341   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.146893   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.147470   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:23.149031   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:23.152913 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:23.152924 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:23.215824 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:23.215846 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:23.249147 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:23.249175 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:25.810217 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:25.820952 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:25.821015 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:25.847989 1446402 cri.go:96] found id: ""
	I1222 00:32:25.848004 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.848011 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:25.848016 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:25.848091 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:25.877243 1446402 cri.go:96] found id: ""
	I1222 00:32:25.877258 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.877265 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:25.877271 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:25.877332 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:25.902255 1446402 cri.go:96] found id: ""
	I1222 00:32:25.902271 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.902278 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:25.902283 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:25.902344 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:25.927468 1446402 cri.go:96] found id: ""
	I1222 00:32:25.927482 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.927489 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:25.927495 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:25.927559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:25.957558 1446402 cri.go:96] found id: ""
	I1222 00:32:25.957571 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.957578 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:25.957583 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:25.957644 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:25.982483 1446402 cri.go:96] found id: ""
	I1222 00:32:25.982509 1446402 logs.go:282] 0 containers: []
	W1222 00:32:25.982517 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:25.982523 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:25.982599 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:26.024676 1446402 cri.go:96] found id: ""
	I1222 00:32:26.024691 1446402 logs.go:282] 0 containers: []
	W1222 00:32:26.024698 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:26.024706 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:26.024724 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:26.087946 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:26.087968 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:26.105041 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:26.105066 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:26.171303 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:26.162481   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.163042   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.164767   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.165348   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:26.166856   16554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:32:26.171313 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:26.171324 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:26.239046 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:26.239065 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.769012 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:28.779505 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:28.779566 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:28.804277 1446402 cri.go:96] found id: ""
	I1222 00:32:28.804291 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.804298 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:28.804303 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:28.804364 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:28.831914 1446402 cri.go:96] found id: ""
	I1222 00:32:28.831927 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.831935 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:28.831940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:28.831999 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:28.858930 1446402 cri.go:96] found id: ""
	I1222 00:32:28.858951 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.858959 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:28.858964 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:28.859026 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:28.884503 1446402 cri.go:96] found id: ""
	I1222 00:32:28.884517 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.884524 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:28.884529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:28.884588 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:28.908385 1446402 cri.go:96] found id: ""
	I1222 00:32:28.908399 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.908406 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:28.908412 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:28.908471 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:28.932216 1446402 cri.go:96] found id: ""
	I1222 00:32:28.932231 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.932238 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:28.932243 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:28.932318 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:28.960692 1446402 cri.go:96] found id: ""
	I1222 00:32:28.960706 1446402 logs.go:282] 0 containers: []
	W1222 00:32:28.960714 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:28.960721 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:28.960732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:28.991268 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:28.991284 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:29.051794 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:29.051812 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:29.076793 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:29.076809 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:29.140856 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:29.132991   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.133597   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135220   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.135674   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:29.137107   16669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:32:29.140866 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:29.140877 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:31.704016 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:31.714529 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:31.714593 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:31.739665 1446402 cri.go:96] found id: ""
	I1222 00:32:31.739679 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.739687 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:31.739693 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:31.739753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:31.764377 1446402 cri.go:96] found id: ""
	I1222 00:32:31.764391 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.764399 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:31.764404 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:31.764465 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:31.793617 1446402 cri.go:96] found id: ""
	I1222 00:32:31.793631 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.793638 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:31.793644 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:31.793709 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:31.818025 1446402 cri.go:96] found id: ""
	I1222 00:32:31.818040 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.818047 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:31.818055 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:31.818145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:31.848262 1446402 cri.go:96] found id: ""
	I1222 00:32:31.848277 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.848285 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:31.848293 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:31.848357 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:31.873649 1446402 cri.go:96] found id: ""
	I1222 00:32:31.873663 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.873670 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:31.873676 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:31.873739 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:31.898375 1446402 cri.go:96] found id: ""
	I1222 00:32:31.898390 1446402 logs.go:282] 0 containers: []
	W1222 00:32:31.898397 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:31.898404 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:31.898416 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:31.955541 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:31.955560 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:31.971557 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:31.971574 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:32.067449 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:32.057015   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.057465   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.059124   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.060199   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:32.063114   16758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:32:32.067459 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:32.067469 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:32.129846 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:32.129865 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:34.659453 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:34.669625 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:34.669685 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:34.696885 1446402 cri.go:96] found id: ""
	I1222 00:32:34.696900 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.696907 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:34.696912 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:34.696972 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:34.721026 1446402 cri.go:96] found id: ""
	I1222 00:32:34.721050 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.721058 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:34.721063 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:34.721133 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:34.745654 1446402 cri.go:96] found id: ""
	I1222 00:32:34.745669 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.745687 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:34.745692 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:34.745753 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:34.771407 1446402 cri.go:96] found id: ""
	I1222 00:32:34.771422 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.771429 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:34.771434 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:34.771502 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:34.795734 1446402 cri.go:96] found id: ""
	I1222 00:32:34.795749 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.795756 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:34.795761 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:34.795821 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:34.824632 1446402 cri.go:96] found id: ""
	I1222 00:32:34.824647 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.824664 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:34.824670 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:34.824737 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:34.850691 1446402 cri.go:96] found id: ""
	I1222 00:32:34.850705 1446402 logs.go:282] 0 containers: []
	W1222 00:32:34.850713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:34.850721 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:34.850732 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:34.923721 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:34.914435   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.915282   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.916952   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.917512   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:34.919258   16859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:32:34.923732 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:34.923743 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:34.988429 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:34.988447 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:35.032884 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:35.032901 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:35.094822 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:35.094842 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:37.611964 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:37.625103 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:37.625168 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:37.652713 1446402 cri.go:96] found id: ""
	I1222 00:32:37.652727 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.652734 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:37.652740 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:37.652805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:37.677907 1446402 cri.go:96] found id: ""
	I1222 00:32:37.677921 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.677928 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:37.677934 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:37.677996 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:37.706882 1446402 cri.go:96] found id: ""
	I1222 00:32:37.706901 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.706909 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:37.706914 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:37.706973 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:37.734381 1446402 cri.go:96] found id: ""
	I1222 00:32:37.734396 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.734403 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:37.734408 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:37.734468 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:37.763444 1446402 cri.go:96] found id: ""
	I1222 00:32:37.763464 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.763483 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:37.763489 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:37.763559 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:37.789695 1446402 cri.go:96] found id: ""
	I1222 00:32:37.789718 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.789726 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:37.789732 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:37.789805 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:37.818949 1446402 cri.go:96] found id: ""
	I1222 00:32:37.818963 1446402 logs.go:282] 0 containers: []
	W1222 00:32:37.818970 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:32:37.818977 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:37.818989 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:37.886829 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:37.877811   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.878764   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880313   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.880782   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:37.882319   16963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 00:32:37.886840 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:37.886850 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:37.953234 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:37.953253 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:37.982264 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:37.982280 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:32:38.049773 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:38.049792 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.567633 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:40.577940 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:32:40.578000 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:32:40.602023 1446402 cri.go:96] found id: ""
	I1222 00:32:40.602038 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.602045 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:32:40.602051 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:32:40.602145 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:32:40.630778 1446402 cri.go:96] found id: ""
	I1222 00:32:40.630802 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.630810 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:32:40.630816 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:32:40.630877 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:32:40.658578 1446402 cri.go:96] found id: ""
	I1222 00:32:40.658592 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.658599 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:32:40.658605 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:32:40.658669 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:32:40.686369 1446402 cri.go:96] found id: ""
	I1222 00:32:40.686384 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.686393 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:32:40.686399 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:32:40.686466 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:32:40.712486 1446402 cri.go:96] found id: ""
	I1222 00:32:40.712501 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.712509 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:32:40.712514 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:32:40.712580 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:32:40.744516 1446402 cri.go:96] found id: ""
	I1222 00:32:40.744531 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.744538 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:32:40.744544 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:32:40.744609 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:32:40.770724 1446402 cri.go:96] found id: ""
	I1222 00:32:40.770738 1446402 logs.go:282] 0 containers: []
	W1222 00:32:40.770745 1446402 logs.go:284] No container was found matching "kindnet"
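	The sequence above queries each control-plane component by name via crictl and finds no containers at all. The same check can be reproduced by hand on the node; the loop below is a sketch of that per-component check, not minikube's actual code, and it assumes crictl is on the PATH (inside the node, e.g. via `minikube ssh`) — otherwise it just reports that and returns.

```shell
# Sketch: reproduce minikube's per-component container check by hand.
# Assumes crictl is installed on the node; exits cleanly elsewhere.
check_control_plane_containers() {
  if ! command -v crictl >/dev/null 2>&1; then
    echo "crictl not available; run this inside the node (e.g. via 'minikube ssh')"
    return 0
  fi
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    # Same flags the log shows: all states, IDs only, filtered by name.
    ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
    if [ -z "$ids" ]; then
      echo "no container found matching \"$name\""
    else
      echo "$name: $ids"
    fi
  done
}
check_control_plane_containers
```

	An empty result for every component, as in the log above, means containerd is reachable but no Kubernetes pods were ever created — consistent with the kubelet never starting.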
	I1222 00:32:40.770754 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:32:40.770766 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:32:40.787581 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:32:40.787598 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:32:40.853257 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.847402   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:32:40.849021   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:32:40.853267 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:32:40.853279 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:32:40.918705 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:32:40.918728 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 00:32:40.947006 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:32:40.947022 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
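	The kubectl stderr captured above repeats one connection-refused error per client retry. When scanning long test logs like this one, collapsing retries down to distinct messages makes the underlying failure easier to spot. A minimal sketch, assuming the stderr has been saved to a file; the sample lines below are abbreviated copies of the block above, and the file path is illustrative:

```shell
# Build a small sample file mimicking the repeated kubectl stderr above.
cat > /tmp/kubectl_err.txt <<'EOF'
E1222 00:32:40.844118   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list"
E1222 00:32:40.845204   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list"
E1222 00:32:40.846833   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
EOF

# Strip the varying "E<date> <time> <pid>" prefix so retries collapse
# to one line, then count duplicates, most frequent first.
sed -E 's/^E[0-9]{4} [0-9:.]+ +[0-9]+ //' /tmp/kubectl_err.txt | sort | uniq -c | sort -rn
```

	Here every retry collapses to a single `memcache.go:265]` line with a count, leaving the one-off "connection to the server localhost:8441 was refused" message clearly visible.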
	I1222 00:32:43.505746 1446402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:32:43.515847 1446402 kubeadm.go:602] duration metric: took 4m1.800425441s to restartPrimaryControlPlane
	W1222 00:32:43.515910 1446402 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 00:32:43.515983 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:32:43.923830 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:32:43.937721 1446402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 00:32:43.945799 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:32:43.945856 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:32:43.953730 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:32:43.953738 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:32:43.953790 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:32:43.962117 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:32:43.962172 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:32:43.969797 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:32:43.977738 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:32:43.977798 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:32:43.986214 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:32:43.994326 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:32:43.994386 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:32:44.004154 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:32:44.013730 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:32:44.013800 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:32:44.022121 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:32:44.061736 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:32:44.061785 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:32:44.140713 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:32:44.140778 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:32:44.140818 1446402 kubeadm.go:319] OS: Linux
	I1222 00:32:44.140862 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:32:44.140909 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:32:44.140955 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:32:44.141002 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:32:44.141048 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:32:44.141095 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:32:44.141140 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:32:44.141187 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:32:44.141232 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:32:44.208774 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:32:44.208878 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:32:44.208966 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:32:44.214899 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:32:44.218610 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:32:44.218748 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:32:44.218821 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:32:44.218895 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:32:44.218955 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:32:44.219024 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:32:44.219076 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:32:44.219138 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:32:44.219198 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:32:44.219270 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:32:44.219343 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:32:44.219380 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:32:44.219458 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:32:44.443111 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:32:44.602435 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:32:44.699769 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:32:44.991502 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:32:45.160573 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:32:45.170594 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:32:45.170674 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:32:45.173883 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:32:45.174024 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:32:45.174124 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:32:45.175745 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:32:45.208642 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:32:45.208749 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:32:45.228521 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:32:45.228620 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:32:45.228659 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:32:45.414555 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:32:45.414668 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:36:45.414312 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00033138s
	I1222 00:36:45.414339 1446402 kubeadm.go:319] 
	I1222 00:36:45.414437 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:36:45.414497 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:36:45.414614 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:36:45.414622 1446402 kubeadm.go:319] 
	I1222 00:36:45.414721 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:36:45.414751 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:36:45.414780 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:36:45.414783 1446402 kubeadm.go:319] 
	I1222 00:36:45.419351 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:36:45.419863 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:36:45.420008 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:36:45.420300 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:36:45.420306 1446402 kubeadm.go:319] 
	I1222 00:36:45.420408 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 00:36:45.420558 1446402 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00033138s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
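	Both kubeadm attempts in this run fail identically: the kubelet never becomes healthy at http://127.0.0.1:10248/healthz within 4m0s, and the preflight warnings note that cgroups v1 support in kubelet v1.35+ requires an explicit opt-in via the 'FailCgroupV1' configuration option. A minimal sketch of the corresponding KubeletConfiguration fragment, assuming the YAML field uses the lowerCamel spelling `failCgroupV1` from the kubelet config API (verify against the v1.35 KubeletConfiguration reference before relying on it):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Explicit opt-in to deprecated cgroups v1 support, per the
# [WARNING SystemVerification] message above. Without it, a v1.35+
# kubelet may refuse to start on a cgroups v1 host such as this
# 5.15.0-1084-aws kernel.
failCgroupV1: false
```

	Whether this is the actual root cause cannot be confirmed from this log alone; `journalctl -xeu kubelet` on the node, as kubeadm suggests above, would show the kubelet's own exit reason.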
	
	I1222 00:36:45.420656 1446402 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 00:36:45.827625 1446402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:36:45.841758 1446402 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 00:36:45.841815 1446402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 00:36:45.850297 1446402 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 00:36:45.850306 1446402 kubeadm.go:158] found existing configuration files:
	
	I1222 00:36:45.850362 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 00:36:45.858548 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 00:36:45.858613 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 00:36:45.866403 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 00:36:45.875159 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 00:36:45.875216 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 00:36:45.883092 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.891274 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 00:36:45.891330 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 00:36:45.899439 1446402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 00:36:45.907618 1446402 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 00:36:45.907680 1446402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 00:36:45.915873 1446402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 00:36:45.954554 1446402 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 00:36:45.954640 1446402 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 00:36:46.034225 1446402 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 00:36:46.034294 1446402 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 00:36:46.034329 1446402 kubeadm.go:319] OS: Linux
	I1222 00:36:46.034372 1446402 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 00:36:46.034419 1446402 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 00:36:46.034466 1446402 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 00:36:46.034512 1446402 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 00:36:46.034571 1446402 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 00:36:46.034626 1446402 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 00:36:46.034679 1446402 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 00:36:46.034746 1446402 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 00:36:46.034795 1446402 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 00:36:46.102483 1446402 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 00:36:46.102587 1446402 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 00:36:46.102678 1446402 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 00:36:46.110548 1446402 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 00:36:46.114145 1446402 out.go:252]   - Generating certificates and keys ...
	I1222 00:36:46.114232 1446402 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 00:36:46.114297 1446402 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 00:36:46.114378 1446402 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 00:36:46.114438 1446402 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 00:36:46.114552 1446402 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 00:36:46.114617 1446402 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 00:36:46.114681 1446402 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 00:36:46.114756 1446402 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 00:36:46.114832 1446402 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 00:36:46.114915 1446402 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 00:36:46.114959 1446402 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 00:36:46.115024 1446402 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 00:36:46.590004 1446402 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 00:36:46.981109 1446402 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 00:36:47.331562 1446402 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 00:36:47.513275 1446402 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 00:36:48.017649 1446402 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 00:36:48.018361 1446402 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 00:36:48.020999 1446402 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 00:36:48.024119 1446402 out.go:252]   - Booting up control plane ...
	I1222 00:36:48.024221 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 00:36:48.024298 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 00:36:48.024363 1446402 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 00:36:48.046779 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 00:36:48.047056 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 00:36:48.054716 1446402 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 00:36:48.055076 1446402 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 00:36:48.055127 1446402 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 00:36:48.190129 1446402 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 00:36:48.190242 1446402 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 00:40:48.190377 1446402 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238043s
	I1222 00:40:48.190402 1446402 kubeadm.go:319] 
	I1222 00:40:48.190458 1446402 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 00:40:48.190495 1446402 kubeadm.go:319] 	- The kubelet is not running
	I1222 00:40:48.190599 1446402 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 00:40:48.190604 1446402 kubeadm.go:319] 
	I1222 00:40:48.190706 1446402 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 00:40:48.190737 1446402 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 00:40:48.190766 1446402 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 00:40:48.190769 1446402 kubeadm.go:319] 
	I1222 00:40:48.196227 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 00:40:48.196675 1446402 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 00:40:48.196785 1446402 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 00:40:48.197020 1446402 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 00:40:48.197025 1446402 kubeadm.go:319] 
	I1222 00:40:48.197092 1446402 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 00:40:48.197152 1446402 kubeadm.go:403] duration metric: took 12m6.51958097s to StartCluster
	I1222 00:40:48.197184 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 00:40:48.197246 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 00:40:48.222444 1446402 cri.go:96] found id: ""
	I1222 00:40:48.222459 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.222466 1446402 logs.go:284] No container was found matching "kube-apiserver"
	I1222 00:40:48.222472 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 00:40:48.222536 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 00:40:48.256342 1446402 cri.go:96] found id: ""
	I1222 00:40:48.256356 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.256363 1446402 logs.go:284] No container was found matching "etcd"
	I1222 00:40:48.256368 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 00:40:48.256430 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 00:40:48.285108 1446402 cri.go:96] found id: ""
	I1222 00:40:48.285122 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.285129 1446402 logs.go:284] No container was found matching "coredns"
	I1222 00:40:48.285135 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 00:40:48.285196 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 00:40:48.317753 1446402 cri.go:96] found id: ""
	I1222 00:40:48.317768 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.317775 1446402 logs.go:284] No container was found matching "kube-scheduler"
	I1222 00:40:48.317780 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 00:40:48.317842 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 00:40:48.347674 1446402 cri.go:96] found id: ""
	I1222 00:40:48.347689 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.347696 1446402 logs.go:284] No container was found matching "kube-proxy"
	I1222 00:40:48.347701 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 00:40:48.347765 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 00:40:48.372255 1446402 cri.go:96] found id: ""
	I1222 00:40:48.372268 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.372275 1446402 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 00:40:48.372281 1446402 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 00:40:48.372339 1446402 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 00:40:48.396691 1446402 cri.go:96] found id: ""
	I1222 00:40:48.396705 1446402 logs.go:282] 0 containers: []
	W1222 00:40:48.396713 1446402 logs.go:284] No container was found matching "kindnet"
	I1222 00:40:48.396725 1446402 logs.go:123] Gathering logs for kubelet ...
	I1222 00:40:48.396735 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 00:40:48.455513 1446402 logs.go:123] Gathering logs for dmesg ...
	I1222 00:40:48.455533 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 00:40:48.471680 1446402 logs.go:123] Gathering logs for describe nodes ...
	I1222 00:40:48.471697 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 00:40:48.541459 1446402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 00:40:48.532149   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.532736   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.534508   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.535199   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:40:48.536956   20896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 00:40:48.541473 1446402 logs.go:123] Gathering logs for containerd ...
	I1222 00:40:48.541483 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 00:40:48.603413 1446402 logs.go:123] Gathering logs for container status ...
	I1222 00:40:48.603432 1446402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 00:40:48.631201 1446402 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 00:40:48.631242 1446402 out.go:285] * 
	W1222 00:40:48.631304 1446402 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.631321 1446402 out.go:285] * 
	W1222 00:40:48.633603 1446402 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 00:40:48.639700 1446402 out.go:203] 
	W1222 00:40:48.642575 1446402 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238043s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 00:40:48.642620 1446402 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 00:40:48.642642 1446402 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 00:40:48.645844 1446402 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248656814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248726812Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248818752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248887126Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.248959487Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249024218Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249082229Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249153910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249223252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249308890Z" level=info msg="Connect containerd service"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.249702304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.252215911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272726589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273135610Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.272971801Z" level=info msg="Start subscribing containerd event"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.273361942Z" level=info msg="Start recovering state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.330860881Z" level=info msg="Start event monitor"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331048714Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331117121Z" level=info msg="Start streaming server"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331184855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331242062Z" level=info msg="runtime interface starting up..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331301705Z" level=info msg="starting plugins..."
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331364582Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 00:28:40 functional-973657 containerd[9735]: time="2025-12-22T00:28:40.331577047Z" level=info msg="containerd successfully booted in 0.110567s"
	Dec 22 00:28:40 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:43:04.646225   22542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:04.646656   22542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:04.648196   22542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:04.648555   22542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:04.650041   22542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:43:04 up 1 day,  7:25,  0 user,  load average: 0.28, 0.24, 0.48
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:43:01 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:01 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 22 00:43:01 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:01 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:02 functional-973657 kubelet[22430]: E1222 00:43:02.038531   22430 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:02 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:02 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:02 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 22 00:43:02 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:02 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:02 functional-973657 kubelet[22435]: E1222 00:43:02.789667   22435 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:02 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:02 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:03 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 22 00:43:03 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:03 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:03 functional-973657 kubelet[22440]: E1222 00:43:03.547969   22440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:03 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:03 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:04 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 22 00:43:04 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:04 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:04 functional-973657 kubelet[22461]: E1222 00:43:04.296342   22461 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:04 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:04 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (324.687755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.45s)
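The kubelet log above shows the common root cause behind these failures: kubelet refuses to start because the host is on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so the apiserver never comes up and every request to port 8441 is refused. A minimal sketch for checking which cgroup version a Linux host runs (assumes /sys/fs/cgroup is mounted; not part of the test harness):

```shell
# Print the cgroup version of the current host.
# cgroup2fs => unified cgroup v2 hierarchy; anything else (e.g. tmpfs)
# indicates the legacy cgroup v1 layout that this kubelet rejects.
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)
if [ "$fstype" = "cgroup2fs" ]; then
  echo "cgroup v2"
else
  echo "cgroup v1"
fi
```

On Ubuntu 20.04 kernels such as the 5.15.0-1084-aws host above, the default is still the cgroup v1 (hybrid) layout unless the kernel command line opts into the unified hierarchy, which would explain this validation failure.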

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (previous WARNING line repeated 9 more times)
I1222 00:41:06.873190 1396864 retry.go:84] will retry after 4.1s: Temporary Error: Get "http://10.108.155.82": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (previous WARNING line repeated 13 more times)
I1222 00:41:20.992629 1396864 retry.go:84] will retry after 6.2s: Temporary Error: Get "http://10.108.155.82": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (previous WARNING line repeated 7 more times)
E1222 00:41:29.153564 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (previous WARNING line repeated 7 more times)
I1222 00:41:37.186180 1396864 retry.go:84] will retry after 4.9s: Temporary Error: Get "http://10.108.155.82": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 30.3s)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (previous WARNING line repeated 14 more times)
I1222 00:41:52.095365 1396864 retry.go:84] will retry after 11.4s: Temporary Error: Get "http://10.108.155.82": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 45.2s)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1222 00:42:13.486601 1396864 retry.go:84] will retry after 16.4s: Temporary Error: Get "http://10.108.155.82": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 1m6.6s)
I1222 00:42:39.935959 1396864 retry.go:74] will retry after 13s: stuck on same error as above for 1m33.1s...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1222 00:44:32.210385 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (327.68947ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
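The warnings above all repeat the same failed GET against the apiserver, with the pod label selector URL-encoded in the query string. As a hedged aside (pure string processing, no cluster needed; the URL is copied from the log), the selector the test was polling for can be decoded like this:

```shell
# Decode the labelSelector query parameter from the failed request URL above.
# %3D is the URL escape for "="; the URL is taken verbatim from the warnings.
URL='https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner'
selector="${URL#*labelSelector=}"   # strip everything up to the parameter value
decoded="$(printf '%s' "$selector" | sed 's/%3D/=/g')"
echo "$decoded"
```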
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
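The inspect output above shows the apiserver's 8441/tcp published on 127.0.0.1 at a dynamically assigned host port. As a hedged sketch of pulling that mapping out with standard tools (the JSON fragment below is trimmed from the dump above; in practice one would pipe the output of `docker inspect functional-973657` instead of a literal string):

```shell
# Extract the HostPort mapped to 8441/tcp from a trimmed docker-inspect fragment.
# The fragment mirrors the NetworkSettings.Ports entry in the dump above.
json='"8441/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "38393" } ]'
port="$(printf '%s' "$json" | sed -n 's/.*"HostPort": "\([0-9]*\)".*/\1/p')"
echo "$port"
```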
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (310.050941ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
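`minikube status` exits non-zero when a component is down (exit status 2 here, with the host Running but the apiserver Stopped), so the harness logs the code as informational rather than aborting. A minimal sketch of that branch, with the values stubbed in from the log above (no minikube binary required; the real invocation is shown in the surrounding lines):

```shell
# Stand-in for `out/minikube-linux-arm64 status --format={{.APIServer}} ...`:
# the real command exited 2 and printed "Stopped"; branch on the code, don't abort.
status_output="Stopped"
status_code=2
if [ "$status_code" -ne 0 ]; then
  msg="status error: exit status $status_code (may be ok)"
else
  msg="apiserver: $status_output"
fi
echo "$msg"
```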
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-973657 image load --daemon kicbase/echo-server:functional-973657 --alsologtostderr                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image save kicbase/echo-server:functional-973657 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image rm kicbase/echo-server:functional-973657 --alsologtostderr                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image save --daemon kicbase/echo-server:functional-973657 --alsologtostderr                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh sudo cat /etc/ssl/certs/1396864.pem                                                                                                       │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh sudo cat /usr/share/ca-certificates/1396864.pem                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh sudo cat /etc/ssl/certs/13968642.pem                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh sudo cat /usr/share/ca-certificates/13968642.pem                                                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh sudo cat /etc/test/nested/copy/1396864/hosts                                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls --format short --alsologtostderr                                                                                                     │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls --format yaml --alsologtostderr                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh            │ functional-973657 ssh pgrep buildkitd                                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ image          │ functional-973657 image build -t localhost/my-image:functional-973657 testdata/build --alsologtostderr                                                          │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls --format json --alsologtostderr                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image          │ functional-973657 image ls --format table --alsologtostderr                                                                                                     │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ update-context │ functional-973657 update-context --alsologtostderr -v=2                                                                                                         │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ update-context │ functional-973657 update-context --alsologtostderr -v=2                                                                                                         │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ update-context │ functional-973657 update-context --alsologtostderr -v=2                                                                                                         │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:43:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:43:20.397088 1463794 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:43:20.397239 1463794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.397280 1463794 out.go:374] Setting ErrFile to fd 2...
	I1222 00:43:20.397293 1463794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.397557 1463794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:43:20.397958 1463794 out.go:368] Setting JSON to false
	I1222 00:43:20.398881 1463794 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":113153,"bootTime":1766251047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:43:20.398948 1463794 start.go:143] virtualization:  
	I1222 00:43:20.402305 1463794 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:43:20.405324 1463794 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:43:20.405407 1463794 notify.go:221] Checking for updates...
	I1222 00:43:20.411808 1463794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:43:20.414617 1463794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:43:20.417523 1463794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:43:20.420394 1463794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:43:20.423238 1463794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:43:20.426518 1463794 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:43:20.427149 1463794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:43:20.460983 1463794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:43:20.461115 1463794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.517726 1463794 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.508508606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.517835 1463794 docker.go:319] overlay module found
	I1222 00:43:20.520954 1463794 out.go:179] * Using the docker driver based on existing profile
	I1222 00:43:20.523767 1463794 start.go:309] selected driver: docker
	I1222 00:43:20.523793 1463794 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.523909 1463794 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:43:20.524020 1463794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.583070 1463794 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.573843603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.583581 1463794 cni.go:84] Creating CNI manager for ""
	I1222 00:43:20.583648 1463794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:43:20.583700 1463794 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.586814 1463794 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.434549170Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.435250411Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.487510389Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.491220267Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.493354515Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.502262359Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\" returns successfully"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.747182575Z" level=info msg="No images store for sha256:c6f17388b1d2cfe7e7c40b3f156f6044efcb7e9b9096539659dfeaa3551ffd36"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.749404882Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.757508558Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.757829939Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.522408033Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\""
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.524889616Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.527049858Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.537129323Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\" returns successfully"
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.203089294Z" level=info msg="No images store for sha256:d93e45b06dc74b61b47f92c1f54fc2fe8a1065b0957845cd8ef0a543857fda4c"
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.205372927Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.218697715Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.219066736Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:36 functional-973657 containerd[9735]: time="2025-12-22T00:43:36.114312833Z" level=info msg="connecting to shim 75jq70ohiv1aaia3vyjvi81z8" address="unix:///run/containerd/s/563421ecb92c909312f64f12e0b899d95865785b93dc3603146a0427a274a23c" namespace=k8s.io protocol=ttrpc version=3
	Dec 22 00:43:36 functional-973657 containerd[9735]: time="2025-12-22T00:43:36.195609353Z" level=info msg="shim disconnected" id=75jq70ohiv1aaia3vyjvi81z8 namespace=k8s.io
	Dec 22 00:43:36 functional-973657 containerd[9735]: time="2025-12-22T00:43:36.195657854Z" level=info msg="cleaning up after shim disconnected" id=75jq70ohiv1aaia3vyjvi81z8 namespace=k8s.io
	Dec 22 00:43:36 functional-973657 containerd[9735]: time="2025-12-22T00:43:36.195669637Z" level=info msg="cleaning up dead shim" id=75jq70ohiv1aaia3vyjvi81z8 namespace=k8s.io
	Dec 22 00:43:36 functional-973657 containerd[9735]: time="2025-12-22T00:43:36.495371147Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-973657\""
	Dec 22 00:43:36 functional-973657 containerd[9735]: time="2025-12-22T00:43:36.501087346Z" level=info msg="ImageCreate event name:\"sha256:233ea769d234c4331d518d7a6819030294f0233ae21105553a0d59ca79c33439\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:36 functional-973657 containerd[9735]: time="2025-12-22T00:43:36.502026309Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:44:58.447940   25086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:44:58.449019   25086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:44:58.450763   25086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:44:58.451325   25086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:44:58.453035   25086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:44:58 up 1 day,  7:27,  0 user,  load average: 0.36, 0.31, 0.48
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:44:55 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:44:55 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 22 00:44:55 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:55 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:56 functional-973657 kubelet[24955]: E1222 00:44:56.042223   24955 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:44:56 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:44:56 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:44:56 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 22 00:44:56 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:56 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:56 functional-973657 kubelet[24960]: E1222 00:44:56.783314   24960 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:44:56 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:44:56 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:44:57 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 22 00:44:57 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:57 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:57 functional-973657 kubelet[24979]: E1222 00:44:57.487467   24979 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:44:57 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:44:57 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:44:58 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 22 00:44:58 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:58 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:44:58 functional-973657 kubelet[25048]: E1222 00:44:58.301167   25048 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:44:58 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:44:58 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (303.40296ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-973657 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-973657 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (59.203409ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-973657 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:

-- stdout --
	[
	    {
	        "Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	        "Created": "2025-12-22T00:13:58.968084222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1435135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T00:13:59.032592154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
	        "LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
	        "Name": "/functional-973657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-973657:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-973657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
	                "LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-973657",
	                "Source": "/var/lib/docker/volumes/functional-973657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-973657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-973657",
	                "name.minikube.sigs.k8s.io": "functional-973657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
	            "SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38390"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38393"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-973657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:06:b1:ad:4a:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
	                    "EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-973657",
	                        "66803363da2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 2 (297.653852ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-973657 ssh findmnt -T /mount1                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ mount     │ -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount3 --alsologtostderr -v=1                            │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh findmnt -T /mount1                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh findmnt -T /mount2                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh findmnt -T /mount3                                                                                                                        │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ mount     │ -p functional-973657 --kill=true                                                                                                                                │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ start     │ -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ start     │ -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1               │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ start     │ -p functional-973657 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                         │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-973657 --alsologtostderr -v=1                                                                                                  │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ ssh       │ functional-973657 ssh sudo systemctl is-active docker                                                                                                           │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ ssh       │ functional-973657 ssh sudo systemctl is-active crio                                                                                                             │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │                     │
	│ image     │ functional-973657 image load --daemon kicbase/echo-server:functional-973657 --alsologtostderr                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image load --daemon kicbase/echo-server:functional-973657 --alsologtostderr                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image load --daemon kicbase/echo-server:functional-973657 --alsologtostderr                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image save kicbase/echo-server:functional-973657 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image rm kicbase/echo-server:functional-973657 --alsologtostderr                                                                              │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image ls                                                                                                                                      │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	│ image     │ functional-973657 image save --daemon kicbase/echo-server:functional-973657 --alsologtostderr                                                                   │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:43 UTC │ 22 Dec 25 00:43 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:43:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:43:20.397088 1463794 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:43:20.397239 1463794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.397280 1463794 out.go:374] Setting ErrFile to fd 2...
	I1222 00:43:20.397293 1463794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.397557 1463794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:43:20.397958 1463794 out.go:368] Setting JSON to false
	I1222 00:43:20.398881 1463794 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":113153,"bootTime":1766251047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:43:20.398948 1463794 start.go:143] virtualization:  
	I1222 00:43:20.402305 1463794 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:43:20.405324 1463794 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:43:20.405407 1463794 notify.go:221] Checking for updates...
	I1222 00:43:20.411808 1463794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:43:20.414617 1463794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:43:20.417523 1463794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:43:20.420394 1463794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:43:20.423238 1463794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:43:20.426518 1463794 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:43:20.427149 1463794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:43:20.460983 1463794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:43:20.461115 1463794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.517726 1463794 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.508508606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.517835 1463794 docker.go:319] overlay module found
	I1222 00:43:20.520954 1463794 out.go:179] * Using the docker driver based on existing profile
	I1222 00:43:20.523767 1463794 start.go:309] selected driver: docker
	I1222 00:43:20.523793 1463794 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.523909 1463794 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:43:20.524020 1463794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.583070 1463794 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.573843603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.583581 1463794 cni.go:84] Creating CNI manager for ""
	I1222 00:43:20.583648 1463794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:43:20.583700 1463794 start.go:353] cluster config:
	{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCo
reDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.586814 1463794 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 00:43:24 functional-973657 containerd[9735]: time="2025-12-22T00:43:24.356771728Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.162750475Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\""
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.165506046Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.168285937Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.177271148Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\" returns successfully"
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.424940125Z" level=info msg="No images store for sha256:c6f17388b1d2cfe7e7c40b3f156f6044efcb7e9b9096539659dfeaa3551ffd36"
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.427016051Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.434549170Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:25 functional-973657 containerd[9735]: time="2025-12-22T00:43:25.435250411Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.487510389Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.491220267Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.493354515Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.502262359Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\" returns successfully"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.747182575Z" level=info msg="No images store for sha256:c6f17388b1d2cfe7e7c40b3f156f6044efcb7e9b9096539659dfeaa3551ffd36"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.749404882Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.757508558Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:26 functional-973657 containerd[9735]: time="2025-12-22T00:43:26.757829939Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.522408033Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\""
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.524889616Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.527049858Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 22 00:43:27 functional-973657 containerd[9735]: time="2025-12-22T00:43:27.537129323Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-973657\" returns successfully"
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.203089294Z" level=info msg="No images store for sha256:d93e45b06dc74b61b47f92c1f54fc2fe8a1065b0957845cd8ef0a543857fda4c"
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.205372927Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-973657\""
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.218697715Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 00:43:28 functional-973657 containerd[9735]: time="2025-12-22T00:43:28.219066736Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-973657\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 00:43:29.795899   23943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:29.796662   23943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:29.798282   23943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:29.798868   23943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 00:43:29.800423   23943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec21 21:36] overlayfs: idmapped layers are currently not supported
	[ +34.220408] overlayfs: idmapped layers are currently not supported
	[Dec21 21:37] overlayfs: idmapped layers are currently not supported
	[Dec21 21:38] overlayfs: idmapped layers are currently not supported
	[Dec21 21:39] overlayfs: idmapped layers are currently not supported
	[ +42.728151] overlayfs: idmapped layers are currently not supported
	[Dec21 21:40] overlayfs: idmapped layers are currently not supported
	[Dec21 21:42] overlayfs: idmapped layers are currently not supported
	[Dec21 21:43] overlayfs: idmapped layers are currently not supported
	[Dec21 22:01] overlayfs: idmapped layers are currently not supported
	[Dec21 22:03] overlayfs: idmapped layers are currently not supported
	[Dec21 22:04] overlayfs: idmapped layers are currently not supported
	[Dec21 22:06] overlayfs: idmapped layers are currently not supported
	[Dec21 22:07] overlayfs: idmapped layers are currently not supported
	[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec21 22:19] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
	[  +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
	[  +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
	[  +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:43:29 up 1 day,  7:26,  0 user,  load average: 0.79, 0.37, 0.52
	Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 00:43:26 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:27 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 531.
	Dec 22 00:43:27 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:27 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:27 functional-973657 kubelet[23733]: E1222 00:43:27.567461   23733 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:27 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:27 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:28 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 532.
	Dec 22 00:43:28 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:28 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:28 functional-973657 kubelet[23791]: E1222 00:43:28.258666   23791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:28 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:28 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:28 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 533.
	Dec 22 00:43:28 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:28 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:29 functional-973657 kubelet[23844]: E1222 00:43:29.059153   23844 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:29 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:29 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 00:43:29 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 534.
	Dec 22 00:43:29 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:29 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 00:43:29 functional-973657 kubelet[23947]: E1222 00:43:29.800393   23947 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 00:43:29 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 00:43:29 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 2 (379.053758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1222 00:40:56.310562 1459386 out.go:360] Setting OutFile to fd 1 ...
I1222 00:40:56.310726 1459386 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:40:56.310763 1459386 out.go:374] Setting ErrFile to fd 2...
I1222 00:40:56.310784 1459386 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:40:56.311047 1459386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:40:56.311363 1459386 mustload.go:66] Loading cluster: functional-973657
I1222 00:40:56.311820 1459386 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:40:56.312358 1459386 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:40:56.338367 1459386 host.go:66] Checking if "functional-973657" exists ...
I1222 00:40:56.338687 1459386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:40:56.455792 1459386 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:40:56.435481749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:40:56.455907 1459386 api_server.go:166] Checking apiserver status ...
I1222 00:40:56.455962 1459386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1222 00:40:56.456004 1459386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:40:56.490711 1459386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
W1222 00:40:56.615452 1459386 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1222 00:40:56.619438 1459386 out.go:179] * The control-plane node functional-973657 apiserver is not running: (state=Stopped)
I1222 00:40:56.624257 1459386 out.go:179]   To start a cluster, run: "minikube start -p functional-973657"

stdout: * The control-plane node functional-973657 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-973657"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1459385: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-973657 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-973657 apply -f testdata/testsvc.yaml: exit status 1 (116.386419ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-973657 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (126.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.108.155.82": Temporary Error: Get "http://10.108.155.82": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-973657 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-973657 get svc nginx-svc: exit status 1 (68.804536ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-973657 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (126.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-973657 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-973657 create deployment hello-node --image kicbase/echo-server: exit status 1 (54.779342ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-973657 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 service list: exit status 103 (265.533495ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-973657 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-973657"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-973657 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-973657 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-973657\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 service list -o json: exit status 103 (254.222953ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-973657 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-973657"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-973657 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 service --namespace=default --https --url hello-node: exit status 103 (258.831383ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-973657 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-973657"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-973657 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 service hello-node --url --format={{.IP}}: exit status 103 (263.143449ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-973657 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-973657"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-973657 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-973657 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-973657\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 service hello-node --url: exit status 103 (248.772619ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-973657 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-973657"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-973657 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-973657 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-973657"
functional_test.go:1579: failed to parse "* The control-plane node functional-973657 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-973657\"": parse "* The control-plane node functional-973657 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-973657\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766364190735391703" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766364190735391703" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766364190735391703" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001/test-1766364190735391703
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.386053ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1222 00:43:11.086154 1396864 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 22 00:43 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 22 00:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 22 00:43 test-1766364190735391703
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh cat /mount-9p/test-1766364190735391703
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-973657 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) Non-zero exit: kubectl --context functional-973657 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (60.199144ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:151: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-973657 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:81: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:82: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:82: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (258.935622ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=46035)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 22 00:43 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 22 00:43 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 22 00:43 test-1766364190735391703
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:84: debugging command "out/minikube-linux-arm64 -p functional-973657 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:95: (dbg) [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:46035
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:95: (dbg) [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001:/mount-9p --alsologtostderr -v=1] stderr:
I1222 00:43:10.794867 1461845 out.go:360] Setting OutFile to fd 1 ...
I1222 00:43:10.795016 1461845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:10.795033 1461845 out.go:374] Setting ErrFile to fd 2...
I1222 00:43:10.795045 1461845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:10.795310 1461845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:43:10.795592 1461845 mustload.go:66] Loading cluster: functional-973657
I1222 00:43:10.795992 1461845 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:10.796559 1461845 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:43:10.816742 1461845 host.go:66] Checking if "functional-973657" exists ...
I1222 00:43:10.817051 1461845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:43:10.922552 1461845 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:10.910620189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:43:10.922714 1461845 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 00:43:10.951610 1461845 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001 into VM as /mount-9p ...
I1222 00:43:10.955103 1461845 out.go:179]   - Mount type:   9p
I1222 00:43:10.958168 1461845 out.go:179]   - User ID:      docker
I1222 00:43:10.961532 1461845 out.go:179]   - Group ID:     docker
I1222 00:43:10.964366 1461845 out.go:179]   - Version:      9p2000.L
I1222 00:43:10.967254 1461845 out.go:179]   - Message Size: 262144
I1222 00:43:10.970192 1461845 out.go:179]   - Options:      map[]
I1222 00:43:10.973123 1461845 out.go:179]   - Bind Address: 192.168.49.1:46035
I1222 00:43:10.976085 1461845 out.go:179] * Userspace file server: 
I1222 00:43:10.976367 1461845 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1222 00:43:10.976483 1461845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:43:11.000815 1461845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:43:11.101656 1461845 mount.go:180] unmount for /mount-9p ran successfully
I1222 00:43:11.101688 1461845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1222 00:43:11.110997 1461845 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46035,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1222 00:43:11.122622 1461845 main.go:127] stdlog: ufs.go:141 connected
I1222 00:43:11.123233 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tversion tag 65535 msize 262144 version '9P2000.L'
I1222 00:43:11.123683 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rversion tag 65535 msize 262144 version '9P2000'
I1222 00:43:11.125843 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1222 00:43:11.125935 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rattach tag 0 aqid (44322 4382980d 'd')
I1222 00:43:11.126892 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 0
I1222 00:43:11.126994 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44322 4382980d 'd') m d775 at 0 mt 1766364190 l 4096 t 0 d 0 ext )
I1222 00:43:11.131611 1461845 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/.mount-process: {Name:mk23552b40153bd47de94bc9cc54614782042b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:43:11.131871 1461845 mount.go:105] mount successful: ""
I1222 00:43:11.135315 1461845 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun414748394/001 to /mount-9p
I1222 00:43:11.138226 1461845 out.go:203] 
I1222 00:43:11.141170 1461845 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1222 00:43:11.910614 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 0
I1222 00:43:11.910700 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44322 4382980d 'd') m d775 at 0 mt 1766364190 l 4096 t 0 d 0 ext )
I1222 00:43:11.911092 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 1 
I1222 00:43:11.911133 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 
I1222 00:43:11.911317 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Topen tag 0 fid 1 mode 0
I1222 00:43:11.911393 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Ropen tag 0 qid (44322 4382980d 'd') iounit 0
I1222 00:43:11.911547 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 0
I1222 00:43:11.911589 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44322 4382980d 'd') m d775 at 0 mt 1766364190 l 4096 t 0 d 0 ext )
I1222 00:43:11.911758 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 0 count 262120
I1222 00:43:11.911882 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 258
I1222 00:43:11.912019 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 258 count 261862
I1222 00:43:11.912052 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:11.912190 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:43:11.912218 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:11.912350 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1222 00:43:11.912387 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 (44324 4382980d '') 
I1222 00:43:11.912521 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:11.912555 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44324 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:11.912702 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:11.912739 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44324 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:11.912895 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 2
I1222 00:43:11.912926 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:11.913060 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 2 0:'test-1766364190735391703' 
I1222 00:43:11.913102 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 (4436b 4382980d '') 
I1222 00:43:11.913223 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:11.913258 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('test-1766364190735391703' 'jenkins' 'jenkins' '' q (4436b 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:11.913385 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:11.913419 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('test-1766364190735391703' 'jenkins' 'jenkins' '' q (4436b 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:11.913558 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 2
I1222 00:43:11.913579 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:11.913754 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1222 00:43:11.913803 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 (44340 4382980d '') 
I1222 00:43:11.913924 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:11.913963 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44340 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:11.914095 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:11.914130 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44340 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:11.914258 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 2
I1222 00:43:11.914283 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:11.914407 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:43:11.914437 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:11.914584 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 1
I1222 00:43:11.914620 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.178537 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 1 0:'test-1766364190735391703' 
I1222 00:43:12.178636 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 (4436b 4382980d '') 
I1222 00:43:12.178808 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 1
I1222 00:43:12.178855 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('test-1766364190735391703' 'jenkins' 'jenkins' '' q (4436b 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.179063 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 1 newfid 2 
I1222 00:43:12.179093 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 
I1222 00:43:12.179219 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Topen tag 0 fid 2 mode 0
I1222 00:43:12.179281 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Ropen tag 0 qid (4436b 4382980d '') iounit 0
I1222 00:43:12.179435 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 1
I1222 00:43:12.179470 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('test-1766364190735391703' 'jenkins' 'jenkins' '' q (4436b 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.179621 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 2 offset 0 count 262120
I1222 00:43:12.179667 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 24
I1222 00:43:12.179797 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 2 offset 24 count 262120
I1222 00:43:12.179825 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:12.179972 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 2 offset 24 count 262120
I1222 00:43:12.180008 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:12.180171 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 2
I1222 00:43:12.180220 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.180465 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 1
I1222 00:43:12.180496 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.502073 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 0
I1222 00:43:12.502215 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44322 4382980d 'd') m d775 at 0 mt 1766364190 l 4096 t 0 d 0 ext )
I1222 00:43:12.502614 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 1 
I1222 00:43:12.502675 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 
I1222 00:43:12.502819 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Topen tag 0 fid 1 mode 0
I1222 00:43:12.502887 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Ropen tag 0 qid (44322 4382980d 'd') iounit 0
I1222 00:43:12.503052 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 0
I1222 00:43:12.503096 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44322 4382980d 'd') m d775 at 0 mt 1766364190 l 4096 t 0 d 0 ext )
I1222 00:43:12.503275 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 0 count 262120
I1222 00:43:12.503386 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 258
I1222 00:43:12.503519 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 258 count 261862
I1222 00:43:12.503563 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:12.503719 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:43:12.503760 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:12.503912 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1222 00:43:12.503957 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 (44324 4382980d '') 
I1222 00:43:12.504115 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:12.504172 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44324 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.504329 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:12.504374 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44324 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.504521 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 2
I1222 00:43:12.504554 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.504691 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 2 0:'test-1766364190735391703' 
I1222 00:43:12.504737 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 (4436b 4382980d '') 
I1222 00:43:12.504876 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:12.504918 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('test-1766364190735391703' 'jenkins' 'jenkins' '' q (4436b 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.505050 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:12.505127 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('test-1766364190735391703' 'jenkins' 'jenkins' '' q (4436b 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.505282 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 2
I1222 00:43:12.505314 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.505469 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1222 00:43:12.505519 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rwalk tag 0 (44340 4382980d '') 
I1222 00:43:12.505640 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:12.505690 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44340 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.505851 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tstat tag 0 fid 2
I1222 00:43:12.505904 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44340 4382980d '') m 644 at 0 mt 1766364190 l 24 t 0 d 0 ext )
I1222 00:43:12.506029 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 2
I1222 00:43:12.506064 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.506243 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tread tag 0 fid 1 offset 258 count 262120
I1222 00:43:12.506282 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rread tag 0 count 0
I1222 00:43:12.506436 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 1
I1222 00:43:12.506473 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.507570 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1222 00:43:12.507645 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rerror tag 0 ename 'file not found' ecode 0
I1222 00:43:12.791674 1461845 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:43050 Tclunk tag 0 fid 0
I1222 00:43:12.791726 1461845 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:43050 Rclunk tag 0
I1222 00:43:12.792985 1461845 main.go:127] stdlog: ufs.go:147 disconnected
I1222 00:43:12.822261 1461845 out.go:179] * Unmounting /mount-9p ...
I1222 00:43:12.825250 1461845 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1222 00:43:12.832237 1461845 mount.go:180] unmount for /mount-9p ran successfully
I1222 00:43:12.832345 1461845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/.mount-process: {Name:mk23552b40153bd47de94bc9cc54614782042b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:43:12.835372 1461845 out.go:203] 
W1222 00:43:12.838427 1461845 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1222 00:43:12.841344 1461845 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.19s)
TestKubernetesUpgrade (796.88s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-108800 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-108800 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.070396489s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-108800
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-108800: (1.350190092s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-108800 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-108800 status --format={{.Host}}: exit status 7 (69.847988ms)
-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-108800 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-108800 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m32.923040479s)

-- stdout --
	* [kubernetes-upgrade-108800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-108800" primary control-plane node in "kubernetes-upgrade-108800" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1222 01:09:12.967896 1585816 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:09:12.968126 1585816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:09:12.968166 1585816 out.go:374] Setting ErrFile to fd 2...
	I1222 01:09:12.968189 1585816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:09:12.968487 1585816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:09:12.968913 1585816 out.go:368] Setting JSON to false
	I1222 01:09:12.969896 1585816 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":114706,"bootTime":1766251047,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:09:12.969999 1585816 start.go:143] virtualization:  
	I1222 01:09:12.973564 1585816 out.go:179] * [kubernetes-upgrade-108800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:09:12.977444 1585816 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:09:12.977542 1585816 notify.go:221] Checking for updates...
	I1222 01:09:12.983155 1585816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:09:12.986129 1585816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:09:12.988961 1585816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:09:12.991834 1585816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:09:12.994680 1585816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:09:12.998142 1585816 config.go:182] Loaded profile config "kubernetes-upgrade-108800": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1222 01:09:12.998719 1585816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:09:13.037089 1585816 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:09:13.037224 1585816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:09:13.095291 1585816 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:09:13.085943786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:09:13.095404 1585816 docker.go:319] overlay module found
	I1222 01:09:13.098541 1585816 out.go:179] * Using the docker driver based on existing profile
	I1222 01:09:13.101397 1585816 start.go:309] selected driver: docker
	I1222 01:09:13.101428 1585816 start.go:928] validating driver "docker" against &{Name:kubernetes-upgrade-108800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-108800 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:09:13.101537 1585816 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:09:13.102365 1585816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:09:13.162833 1585816 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:09:13.153753962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:09:13.163162 1585816 cni.go:84] Creating CNI manager for ""
	I1222 01:09:13.163227 1585816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:09:13.163276 1585816 start.go:353] cluster config:
	{Name:kubernetes-upgrade-108800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-108800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:09:13.166441 1585816 out.go:179] * Starting "kubernetes-upgrade-108800" primary control-plane node in "kubernetes-upgrade-108800" cluster
	I1222 01:09:13.169292 1585816 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:09:13.172271 1585816 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:09:13.175205 1585816 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:09:13.175259 1585816 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:09:13.175282 1585816 cache.go:65] Caching tarball of preloaded images
	I1222 01:09:13.175330 1585816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:09:13.175415 1585816 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:09:13.175426 1585816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:09:13.175571 1585816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/config.json ...
	I1222 01:09:13.207015 1585816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:09:13.207036 1585816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:09:13.207050 1585816 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:09:13.207081 1585816 start.go:360] acquireMachinesLock for kubernetes-upgrade-108800: {Name:mka511dfdcbdea333fd185e411730c7c8b40f7e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:09:13.207135 1585816 start.go:364] duration metric: took 36.611µs to acquireMachinesLock for "kubernetes-upgrade-108800"
	I1222 01:09:13.207155 1585816 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:09:13.207161 1585816 fix.go:54] fixHost starting: 
	I1222 01:09:13.207416 1585816 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108800 --format={{.State.Status}}
	I1222 01:09:13.236321 1585816 fix.go:112] recreateIfNeeded on kubernetes-upgrade-108800: state=Stopped err=<nil>
	W1222 01:09:13.236346 1585816 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:09:13.239797 1585816 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-108800" ...
	I1222 01:09:13.240092 1585816 cli_runner.go:164] Run: docker start kubernetes-upgrade-108800
	I1222 01:09:13.570365 1585816 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-108800 --format={{.State.Status}}
	I1222 01:09:13.598778 1585816 kic.go:430] container "kubernetes-upgrade-108800" state is running.
	I1222 01:09:13.599198 1585816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-108800
	I1222 01:09:13.622905 1585816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/config.json ...
	I1222 01:09:13.623293 1585816 machine.go:94] provisionDockerMachine start ...
	I1222 01:09:13.623376 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:13.658393 1585816 main.go:144] libmachine: Using SSH client type: native
	I1222 01:09:13.658713 1585816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38595 <nil> <nil>}
	I1222 01:09:13.658721 1585816 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:09:13.659436 1585816 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:09:16.797863 1585816 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-108800
	
	I1222 01:09:16.797891 1585816 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-108800"
	I1222 01:09:16.797968 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:16.817584 1585816 main.go:144] libmachine: Using SSH client type: native
	I1222 01:09:16.817908 1585816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38595 <nil> <nil>}
	I1222 01:09:16.817924 1585816 main.go:144] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-108800 && echo "kubernetes-upgrade-108800" | sudo tee /etc/hostname
	I1222 01:09:16.962244 1585816 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-108800
	
	I1222 01:09:16.962325 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:16.981389 1585816 main.go:144] libmachine: Using SSH client type: native
	I1222 01:09:16.981714 1585816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38595 <nil> <nil>}
	I1222 01:09:16.981736 1585816 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-108800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-108800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-108800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:09:17.122613 1585816 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:09:17.122642 1585816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:09:17.122674 1585816 ubuntu.go:190] setting up certificates
	I1222 01:09:17.122692 1585816 provision.go:84] configureAuth start
	I1222 01:09:17.122767 1585816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-108800
	I1222 01:09:17.145206 1585816 provision.go:143] copyHostCerts
	I1222 01:09:17.145275 1585816 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:09:17.145294 1585816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:09:17.145374 1585816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:09:17.145482 1585816 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:09:17.145493 1585816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:09:17.145521 1585816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:09:17.145585 1585816 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:09:17.145594 1585816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:09:17.145618 1585816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:09:17.145675 1585816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-108800 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-108800 localhost minikube]
	I1222 01:09:17.269319 1585816 provision.go:177] copyRemoteCerts
	I1222 01:09:17.269388 1585816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:09:17.269438 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:17.287415 1585816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38595 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kubernetes-upgrade-108800/id_rsa Username:docker}
	I1222 01:09:17.381850 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:09:17.400472 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:09:17.419887 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1222 01:09:17.439272 1585816 provision.go:87] duration metric: took 316.554063ms to configureAuth
	I1222 01:09:17.439303 1585816 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:09:17.439525 1585816 config.go:182] Loaded profile config "kubernetes-upgrade-108800": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:09:17.439538 1585816 machine.go:97] duration metric: took 3.816228925s to provisionDockerMachine
	I1222 01:09:17.439547 1585816 start.go:293] postStartSetup for "kubernetes-upgrade-108800" (driver="docker")
	I1222 01:09:17.439562 1585816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:09:17.439634 1585816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:09:17.439675 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:17.457447 1585816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38595 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kubernetes-upgrade-108800/id_rsa Username:docker}
	I1222 01:09:17.554924 1585816 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:09:17.558664 1585816 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:09:17.558713 1585816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:09:17.558726 1585816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:09:17.558810 1585816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:09:17.558909 1585816 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:09:17.559014 1585816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:09:17.566923 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:09:17.585083 1585816 start.go:296] duration metric: took 145.520302ms for postStartSetup
	I1222 01:09:17.585163 1585816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:09:17.585213 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:17.604188 1585816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38595 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kubernetes-upgrade-108800/id_rsa Username:docker}
	I1222 01:09:17.699314 1585816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:09:17.704358 1585816 fix.go:56] duration metric: took 4.497190522s for fixHost
	I1222 01:09:17.704384 1585816 start.go:83] releasing machines lock for "kubernetes-upgrade-108800", held for 4.497239933s
	I1222 01:09:17.704474 1585816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-108800
	I1222 01:09:17.731205 1585816 ssh_runner.go:195] Run: cat /version.json
	I1222 01:09:17.731242 1585816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:09:17.731263 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:17.731300 1585816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-108800
	I1222 01:09:17.763624 1585816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38595 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kubernetes-upgrade-108800/id_rsa Username:docker}
	I1222 01:09:17.764192 1585816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38595 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kubernetes-upgrade-108800/id_rsa Username:docker}
	I1222 01:09:17.953025 1585816 ssh_runner.go:195] Run: systemctl --version
	I1222 01:09:17.960447 1585816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:09:17.965938 1585816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:09:17.966026 1585816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:09:17.975813 1585816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:09:17.975837 1585816 start.go:496] detecting cgroup driver to use...
	I1222 01:09:17.975884 1585816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:09:17.975942 1585816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:09:17.994030 1585816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:09:18.011300 1585816 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:09:18.011396 1585816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:09:18.030707 1585816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:09:18.045070 1585816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:09:18.160475 1585816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:09:18.295539 1585816 docker.go:234] disabling docker service ...
	I1222 01:09:18.295611 1585816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:09:18.311753 1585816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:09:18.324958 1585816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:09:18.437462 1585816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:09:18.549149 1585816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:09:18.562363 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:09:18.578534 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:09:18.588863 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:09:18.598883 1585816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:09:18.598955 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:09:18.608878 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:09:18.619327 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:09:18.628164 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:09:18.637082 1585816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:09:18.645493 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:09:18.654789 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:09:18.663980 1585816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:09:18.674032 1585816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:09:18.682244 1585816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:09:18.690320 1585816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:09:18.819099 1585816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:09:18.985735 1585816 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:09:18.985843 1585816 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:09:18.989928 1585816 start.go:564] Will wait 60s for crictl version
	I1222 01:09:18.990021 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:18.994016 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:09:19.022322 1585816 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:09:19.022429 1585816 ssh_runner.go:195] Run: containerd --version
	I1222 01:09:19.042733 1585816 ssh_runner.go:195] Run: containerd --version
	I1222 01:09:19.068460 1585816 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:09:19.071478 1585816 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-108800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:09:19.088340 1585816 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:09:19.092521 1585816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:09:19.102887 1585816 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-108800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-108800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:09:19.103015 1585816 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:09:19.103088 1585816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:09:19.128886 1585816 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1222 01:09:19.128972 1585816 ssh_runner.go:195] Run: which lz4
	I1222 01:09:19.132831 1585816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1222 01:09:19.136552 1585816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1222 01:09:19.136584 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305659384 bytes)
	I1222 01:09:22.075511 1585816 containerd.go:563] duration metric: took 2.942726488s to copy over tarball
	I1222 01:09:22.075646 1585816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1222 01:09:24.170274 1585816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.094595093s)
	I1222 01:09:24.170394 1585816 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	I1222 01:09:24.170488 1585816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:09:24.203796 1585816 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1222 01:09:24.203821 1585816 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1222 01:09:24.207451 1585816 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:09:24.207547 1585816 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:09:24.207741 1585816 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1222 01:09:24.207789 1585816 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:09:24.207854 1585816 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:09:24.207920 1585816 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:09:24.207938 1585816 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:09:24.207458 1585816 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:09:24.209915 1585816 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:09:24.209976 1585816 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:09:24.211766 1585816 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:09:24.211958 1585816 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:09:24.212225 1585816 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:09:24.212491 1585816 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1222 01:09:24.213178 1585816 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:09:24.209919 1585816 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:09:24.563514 1585816 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1222 01:09:24.563591 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1222 01:09:24.566404 1585816 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1222 01:09:24.566525 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:09:24.590103 1585816 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1222 01:09:24.590246 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1222 01:09:24.592341 1585816 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1222 01:09:24.592485 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:09:24.608529 1585816 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1222 01:09:24.608655 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:09:24.610416 1585816 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1222 01:09:24.610529 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:09:24.621500 1585816 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1222 01:09:24.621658 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:09:24.671796 1585816 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1222 01:09:24.672391 1585816 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1222 01:09:24.672471 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:24.673457 1585816 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1222 01:09:24.673496 1585816 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:09:24.673542 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:24.681786 1585816 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1222 01:09:24.681826 1585816 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:09:24.681875 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:24.681927 1585816 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1222 01:09:24.681945 1585816 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:09:24.681970 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:24.686601 1585816 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1222 01:09:24.686643 1585816 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:09:24.686698 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:24.701044 1585816 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1222 01:09:24.701139 1585816 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:09:24.701226 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:24.701322 1585816 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1222 01:09:24.701369 1585816 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:09:24.701410 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:24.701492 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:09:24.702779 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:09:24.702781 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:09:24.702803 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:09:24.702826 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:09:24.765595 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:09:24.765688 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:09:24.765748 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:09:24.830508 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:09:24.830663 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:09:24.830719 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:09:24.830821 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:09:24.869878 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:09:24.870048 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:09:24.870174 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:09:24.949687 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:09:24.949884 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:09:24.949964 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:09:24.950009 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:09:24.964029 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:09:24.964187 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:09:24.964310 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1222 01:09:25.029291 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:09:25.029407 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:09:25.029478 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:09:25.029546 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:09:25.044827 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:09:25.046222 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	W1222 01:09:25.544321 1585816 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1222 01:09:25.544486 1585816 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1222 01:09:25.544556 1585816 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:09:25.569811 1585816 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1222 01:09:25.569865 1585816 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:09:25.569920 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:25.573577 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:09:25.711552 1585816 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1222 01:09:25.712277 1585816 cache_images.go:94] duration metric: took 1.508440177s to LoadCachedImages
	W1222 01:09:25.712363 1585816 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1: no such file or directory
	I1222 01:09:25.712379 1585816 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:09:25.712490 1585816 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-108800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-108800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:09:25.712565 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:09:25.752319 1585816 cni.go:84] Creating CNI manager for ""
	I1222 01:09:25.752402 1585816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:09:25.752446 1585816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:09:25.752504 1585816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-108800 NodeName:kubernetes-upgrade-108800 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:09:25.752651 1585816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-108800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:09:25.752771 1585816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:09:25.761912 1585816 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:09:25.762035 1585816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:09:25.770018 1585816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1222 01:09:25.784885 1585816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:09:25.798657 1585816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
	I1222 01:09:25.812676 1585816 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:09:25.817109 1585816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:09:25.828865 1585816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:09:25.965532 1585816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:09:25.993532 1585816 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800 for IP: 192.168.76.2
	I1222 01:09:25.993556 1585816 certs.go:195] generating shared ca certs ...
	I1222 01:09:25.993578 1585816 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:09:25.993721 1585816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:09:25.993768 1585816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:09:25.993779 1585816 certs.go:257] generating profile certs ...
	I1222 01:09:25.993861 1585816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.key
	I1222 01:09:25.993933 1585816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/apiserver.key.f76217d1
	I1222 01:09:25.993979 1585816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/proxy-client.key
	I1222 01:09:25.994108 1585816 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:09:25.994148 1585816 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:09:25.994162 1585816 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:09:25.994187 1585816 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:09:25.994215 1585816 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:09:25.994241 1585816 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:09:25.994291 1585816 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:09:25.994888 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:09:26.027531 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:09:26.052529 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:09:26.082668 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:09:26.105663 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1222 01:09:26.125766 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:09:26.146630 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:09:26.167369 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:09:26.189101 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:09:26.210417 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:09:26.230758 1585816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:09:26.251700 1585816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:09:26.266738 1585816 ssh_runner.go:195] Run: openssl version
	I1222 01:09:26.279696 1585816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:09:26.288113 1585816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:09:26.297052 1585816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:09:26.301891 1585816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:09:26.301995 1585816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:09:26.345737 1585816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:09:26.353714 1585816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:09:26.361828 1585816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:09:26.370391 1585816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:09:26.374773 1585816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:09:26.374842 1585816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:09:26.416233 1585816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:09:26.424817 1585816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:09:26.432654 1585816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:09:26.440506 1585816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:09:26.445484 1585816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:09:26.445558 1585816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:09:26.488438 1585816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:09:26.496924 1585816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:09:26.501222 1585816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:09:26.545396 1585816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:09:26.587366 1585816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:09:26.634032 1585816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:09:26.679258 1585816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:09:26.723507 1585816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:09:26.777604 1585816 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-108800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-108800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:09:26.777696 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:09:26.777768 1585816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:09:26.808245 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:09:26.808270 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:09:26.808275 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:09:26.808279 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:09:26.808282 1585816 cri.go:96] found id: ""
	I1222 01:09:26.808336 1585816 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1222 01:09:26.822937 1585816 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-22T01:09:26Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1222 01:09:26.823008 1585816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:09:26.830967 1585816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:09:26.831039 1585816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:09:26.831109 1585816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:09:26.838832 1585816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:09:26.839442 1585816 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-108800" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:09:26.839687 1585816 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-108800" cluster setting kubeconfig missing "kubernetes-upgrade-108800" context setting]
	I1222 01:09:26.840137 1585816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:09:26.840790 1585816 kapi.go:59] client config for kubernetes-upgrade-108800: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.key", CAFile:"/home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2001100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 01:09:26.841341 1585816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 01:09:26.841361 1585816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 01:09:26.841368 1585816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 01:09:26.841373 1585816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1222 01:09:26.841377 1585816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1222 01:09:26.841643 1585816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:09:26.850744 1585816 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 01:08:49.000270122 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 01:09:25.808619105 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-108800"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1222 01:09:26.850773 1585816 kubeadm.go:1161] stopping kube-system containers ...
	I1222 01:09:26.850786 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1222 01:09:26.850847 1585816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:09:26.878970 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:09:26.878996 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:09:26.879002 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:09:26.879006 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:09:26.879012 1585816 cri.go:96] found id: ""
	I1222 01:09:26.879017 1585816 cri.go:274] Stopping containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:09:26.879088 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:09:26.883027 1585816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e
	I1222 01:09:26.922714 1585816 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 01:09:26.938878 1585816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:09:26.947806 1585816 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 22 01:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 22 01:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 22 01:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 22 01:08 /etc/kubernetes/scheduler.conf
	
	I1222 01:09:26.947923 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:09:26.956816 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:09:26.965338 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:09:26.973622 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:09:26.973692 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:09:26.989160 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:09:26.998020 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:09:26.998124 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:09:27.007644 1585816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:09:27.017804 1585816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:09:27.065171 1585816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:09:28.278912 1585816 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213701191s)
	I1222 01:09:28.278992 1585816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:09:28.583172 1585816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:09:28.692452 1585816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1222 01:09:28.760236 1585816 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:09:28.760383 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:29.260647 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:29.760615 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:30.260734 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:30.760582 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:31.261241 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:31.761347 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:32.261325 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:32.761193 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:33.261450 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:33.760535 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:34.260450 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:34.761348 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:35.260549 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:35.761217 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:36.261130 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:36.760529 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:37.260508 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:37.760507 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:38.260618 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:38.761224 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:39.261026 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:39.761346 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:40.261359 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:40.760519 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:41.261099 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:41.761308 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:42.260485 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:42.760524 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:43.261159 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:43.760510 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:44.260561 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:44.760538 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:45.261493 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:45.760462 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:46.260519 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:46.760819 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:47.261020 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:47.760906 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:48.260919 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:48.761219 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:49.260766 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:49.760504 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:50.260688 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:50.761415 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:51.260529 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:51.760846 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:52.260568 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:52.761049 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:53.260914 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:53.760585 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:54.261430 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:54.760931 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:55.261206 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:55.761396 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:56.260508 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:56.760491 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:57.261090 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:57.760499 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:58.260503 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:58.760437 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:59.261110 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:09:59.760545 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:00.260519 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:00.760526 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:01.261458 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:01.760503 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:02.260594 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:02.760808 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:03.261334 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:03.760662 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:04.261194 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:04.761195 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:05.261399 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:05.760504 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:06.261249 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:06.760511 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:07.261143 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:07.760702 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:08.261112 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:08.760536 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:09.261257 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:09.760648 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:10.261446 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:10.760803 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:11.260495 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:11.761438 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:12.260520 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:12.760573 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:13.261120 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:13.760510 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:14.260609 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:14.760501 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:15.261408 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:15.760555 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:16.260472 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:16.761381 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:17.260581 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:17.761480 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:18.260697 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:18.760510 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:19.260496 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:19.761079 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:20.261276 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:20.760465 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:21.261080 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:21.760505 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:22.260915 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:22.760539 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:23.261417 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:23.760468 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:24.260503 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:24.760444 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:25.261236 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:25.761248 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:26.261131 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:26.761210 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:27.261285 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:27.760546 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:28.261490 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:28.761303 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:28.761402 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:28.825098 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:28.825124 1585816 cri.go:96] found id: ""
	I1222 01:10:28.825134 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:28.825189 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:28.834777 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:28.834854 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:28.873582 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:28.873603 1585816 cri.go:96] found id: ""
	I1222 01:10:28.873611 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:28.873667 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:28.882981 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:28.883047 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:28.934045 1585816 cri.go:96] found id: ""
	I1222 01:10:28.934068 1585816 logs.go:282] 0 containers: []
	W1222 01:10:28.934094 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:28.934101 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:28.934181 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:28.992560 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:28.992582 1585816 cri.go:96] found id: ""
	I1222 01:10:28.992590 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:28.992649 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:29.008995 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:29.009081 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:29.067575 1585816 cri.go:96] found id: ""
	I1222 01:10:29.067604 1585816 logs.go:282] 0 containers: []
	W1222 01:10:29.067612 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:29.067620 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:29.067681 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:29.146148 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:29.146223 1585816 cri.go:96] found id: ""
	I1222 01:10:29.146246 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:29.146353 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:29.154916 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:29.155028 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:29.201605 1585816 cri.go:96] found id: ""
	I1222 01:10:29.201685 1585816 logs.go:282] 0 containers: []
	W1222 01:10:29.201697 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:29.201739 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:29.201832 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:29.259735 1585816 cri.go:96] found id: ""
	I1222 01:10:29.259814 1585816 logs.go:282] 0 containers: []
	W1222 01:10:29.259827 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:29.259873 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:29.259888 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:29.369904 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:29.369975 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:29.369991 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:29.433047 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:29.433136 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:29.490093 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:29.490176 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:29.534487 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:29.534563 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:29.601189 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:29.601269 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:29.676948 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:29.677024 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:29.712035 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:29.712066 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:29.789696 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:29.789781 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:32.334546 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:32.352568 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:32.352644 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:32.391448 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:32.391473 1585816 cri.go:96] found id: ""
	I1222 01:10:32.391482 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:32.391541 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:32.398458 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:32.398543 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:32.434524 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:32.434552 1585816 cri.go:96] found id: ""
	I1222 01:10:32.434562 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:32.434673 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:32.441909 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:32.441990 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:32.496881 1585816 cri.go:96] found id: ""
	I1222 01:10:32.496909 1585816 logs.go:282] 0 containers: []
	W1222 01:10:32.496923 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:32.496929 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:32.496990 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:32.541409 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:32.541439 1585816 cri.go:96] found id: ""
	I1222 01:10:32.541448 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:32.541527 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:32.546000 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:32.546155 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:32.580501 1585816 cri.go:96] found id: ""
	I1222 01:10:32.580528 1585816 logs.go:282] 0 containers: []
	W1222 01:10:32.580543 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:32.580585 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:32.580685 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:32.612886 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:32.612911 1585816 cri.go:96] found id: ""
	I1222 01:10:32.612948 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:32.613039 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:32.617160 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:32.617300 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:32.657527 1585816 cri.go:96] found id: ""
	I1222 01:10:32.657567 1585816 logs.go:282] 0 containers: []
	W1222 01:10:32.657580 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:32.657609 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:32.657739 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:32.698294 1585816 cri.go:96] found id: ""
	I1222 01:10:32.698329 1585816 logs.go:282] 0 containers: []
	W1222 01:10:32.698337 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:32.698383 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:32.698402 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:32.786667 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:32.786710 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:32.865843 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:32.865938 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:32.943852 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:32.943891 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:32.997768 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:32.997815 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:33.069428 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:33.069458 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:33.187224 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:33.187346 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:33.208903 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:33.208930 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:33.314731 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:33.314751 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:33.314765 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:35.886257 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:35.902141 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:35.902223 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:35.950875 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:35.950920 1585816 cri.go:96] found id: ""
	I1222 01:10:35.950931 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:35.950987 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:35.955843 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:35.955936 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:36.021930 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:36.021955 1585816 cri.go:96] found id: ""
	I1222 01:10:36.021964 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:36.022024 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:36.035583 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:36.035676 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:36.095865 1585816 cri.go:96] found id: ""
	I1222 01:10:36.095890 1585816 logs.go:282] 0 containers: []
	W1222 01:10:36.095913 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:36.095922 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:36.095987 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:36.167927 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:36.167952 1585816 cri.go:96] found id: ""
	I1222 01:10:36.167962 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:36.168025 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:36.175056 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:36.175158 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:36.215225 1585816 cri.go:96] found id: ""
	I1222 01:10:36.215303 1585816 logs.go:282] 0 containers: []
	W1222 01:10:36.215327 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:36.215350 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:36.215471 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:36.256802 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:36.256834 1585816 cri.go:96] found id: ""
	I1222 01:10:36.256844 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:36.256913 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:36.263453 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:36.263555 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:36.302020 1585816 cri.go:96] found id: ""
	I1222 01:10:36.302048 1585816 logs.go:282] 0 containers: []
	W1222 01:10:36.302062 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:36.302070 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:36.302251 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:36.358253 1585816 cri.go:96] found id: ""
	I1222 01:10:36.358280 1585816 logs.go:282] 0 containers: []
	W1222 01:10:36.358289 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:36.358322 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:36.358343 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:36.460203 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:36.460229 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:36.460244 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:36.500130 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:36.500166 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:36.534856 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:36.534897 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:36.615761 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:36.615802 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:36.653899 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:36.653933 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:36.695411 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:36.695449 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:36.749781 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:36.749819 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:36.826725 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:36.826763 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:39.350303 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:39.363167 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:39.363239 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:39.402590 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:39.402618 1585816 cri.go:96] found id: ""
	I1222 01:10:39.402628 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:39.402685 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:39.407279 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:39.407384 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:39.445197 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:39.445220 1585816 cri.go:96] found id: ""
	I1222 01:10:39.445229 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:39.445303 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:39.450361 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:39.450435 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:39.476371 1585816 cri.go:96] found id: ""
	I1222 01:10:39.476396 1585816 logs.go:282] 0 containers: []
	W1222 01:10:39.476433 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:39.476446 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:39.476509 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:39.504591 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:39.504611 1585816 cri.go:96] found id: ""
	I1222 01:10:39.504619 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:39.504675 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:39.508802 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:39.508874 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:39.540619 1585816 cri.go:96] found id: ""
	I1222 01:10:39.540642 1585816 logs.go:282] 0 containers: []
	W1222 01:10:39.540651 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:39.540658 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:39.540718 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:39.591199 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:39.591219 1585816 cri.go:96] found id: ""
	I1222 01:10:39.591233 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:39.591288 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:39.596982 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:39.597054 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:39.641328 1585816 cri.go:96] found id: ""
	I1222 01:10:39.641408 1585816 logs.go:282] 0 containers: []
	W1222 01:10:39.641430 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:39.641450 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:39.641540 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:39.697978 1585816 cri.go:96] found id: ""
	I1222 01:10:39.698001 1585816 logs.go:282] 0 containers: []
	W1222 01:10:39.698009 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:39.698022 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:39.698034 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:39.742471 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:39.742510 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:39.780655 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:39.782598 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:39.856063 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:39.856138 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:39.887854 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:39.887884 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:39.995339 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:39.995358 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:39.995371 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:40.116459 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:40.116540 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:40.152161 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:40.152199 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:40.207390 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:40.207429 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:42.775042 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:42.786776 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:42.786850 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:42.814997 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:42.815019 1585816 cri.go:96] found id: ""
	I1222 01:10:42.815027 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:42.815085 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:42.819364 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:42.819438 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:42.851891 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:42.851911 1585816 cri.go:96] found id: ""
	I1222 01:10:42.851920 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:42.851976 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:42.856040 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:42.856115 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:42.882528 1585816 cri.go:96] found id: ""
	I1222 01:10:42.882612 1585816 logs.go:282] 0 containers: []
	W1222 01:10:42.882635 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:42.882673 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:42.882758 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:42.909949 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:42.909969 1585816 cri.go:96] found id: ""
	I1222 01:10:42.909977 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:42.910035 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:42.914227 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:42.914300 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:42.942280 1585816 cri.go:96] found id: ""
	I1222 01:10:42.942304 1585816 logs.go:282] 0 containers: []
	W1222 01:10:42.942313 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:42.942320 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:42.942379 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:43.013189 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:43.013280 1585816 cri.go:96] found id: ""
	I1222 01:10:43.013304 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:43.013395 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:43.018245 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:43.018368 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:43.083333 1585816 cri.go:96] found id: ""
	I1222 01:10:43.083415 1585816 logs.go:282] 0 containers: []
	W1222 01:10:43.083438 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:43.083461 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:43.083576 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:43.137392 1585816 cri.go:96] found id: ""
	I1222 01:10:43.137466 1585816 logs.go:282] 0 containers: []
	W1222 01:10:43.137489 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:43.137520 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:43.137559 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:43.203342 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:43.203419 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:43.280480 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:43.280499 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:43.280513 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:43.328255 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:43.328292 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:43.368098 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:43.368133 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:43.386409 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:43.386440 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:43.435171 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:43.435210 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:43.484482 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:43.484520 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:43.528453 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:43.528539 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:46.072260 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:46.127475 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:46.127580 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:46.292258 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:46.292345 1585816 cri.go:96] found id: ""
	I1222 01:10:46.292358 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:46.292432 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:46.304555 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:46.304655 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:46.394676 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:46.394696 1585816 cri.go:96] found id: ""
	I1222 01:10:46.394710 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:46.394771 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:46.400546 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:46.400627 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:46.555538 1585816 cri.go:96] found id: ""
	I1222 01:10:46.555562 1585816 logs.go:282] 0 containers: []
	W1222 01:10:46.555647 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:46.555655 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:46.555731 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:46.655626 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:46.655646 1585816 cri.go:96] found id: ""
	I1222 01:10:46.655661 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:46.655730 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:46.661782 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:46.661852 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:46.729039 1585816 cri.go:96] found id: ""
	I1222 01:10:46.729069 1585816 logs.go:282] 0 containers: []
	W1222 01:10:46.729079 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:46.729086 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:46.729149 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:46.782342 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:46.782365 1585816 cri.go:96] found id: ""
	I1222 01:10:46.782374 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:46.782441 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:46.791647 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:46.791743 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:46.846361 1585816 cri.go:96] found id: ""
	I1222 01:10:46.846388 1585816 logs.go:282] 0 containers: []
	W1222 01:10:46.846397 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:46.846404 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:46.846467 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:46.886127 1585816 cri.go:96] found id: ""
	I1222 01:10:46.886153 1585816 logs.go:282] 0 containers: []
	W1222 01:10:46.886161 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:46.886177 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:46.886189 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:46.957449 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:46.957486 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:47.073249 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:47.073282 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:47.149811 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:47.149849 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:47.183676 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:47.183711 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:47.218546 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:47.218576 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:47.316706 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:47.316784 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:47.340828 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:47.340853 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:47.444345 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:47.444370 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:47.444383 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:50.010641 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:50.029321 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:50.029400 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:50.078249 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:50.078271 1585816 cri.go:96] found id: ""
	I1222 01:10:50.078279 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:50.078342 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:50.093660 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:50.093736 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:50.138018 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:50.138038 1585816 cri.go:96] found id: ""
	I1222 01:10:50.138046 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:50.138121 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:50.142574 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:50.142655 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:50.179468 1585816 cri.go:96] found id: ""
	I1222 01:10:50.179497 1585816 logs.go:282] 0 containers: []
	W1222 01:10:50.179506 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:50.179512 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:50.179580 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:50.212494 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:50.212515 1585816 cri.go:96] found id: ""
	I1222 01:10:50.212524 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:50.212584 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:50.217087 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:50.217159 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:50.255485 1585816 cri.go:96] found id: ""
	I1222 01:10:50.255508 1585816 logs.go:282] 0 containers: []
	W1222 01:10:50.255518 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:50.255524 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:50.255584 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:50.292227 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:50.292248 1585816 cri.go:96] found id: ""
	I1222 01:10:50.292256 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:50.292315 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:50.296789 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:50.296862 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:50.364037 1585816 cri.go:96] found id: ""
	I1222 01:10:50.364060 1585816 logs.go:282] 0 containers: []
	W1222 01:10:50.364069 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:50.364076 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:50.364137 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:50.392551 1585816 cri.go:96] found id: ""
	I1222 01:10:50.392574 1585816 logs.go:282] 0 containers: []
	W1222 01:10:50.392583 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:50.392597 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:50.392608 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:50.408750 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:50.408775 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:50.460568 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:50.460646 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:50.531818 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:50.531910 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:50.612076 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:50.612096 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:50.612110 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:50.654307 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:50.654381 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:50.706297 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:50.706393 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:50.791437 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:50.794244 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:50.856154 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:50.856233 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:53.402969 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:53.414435 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:53.414510 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:53.451320 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:53.451341 1585816 cri.go:96] found id: ""
	I1222 01:10:53.451349 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:53.451413 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:53.455778 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:53.455851 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:53.485700 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:53.485720 1585816 cri.go:96] found id: ""
	I1222 01:10:53.485728 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:53.485785 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:53.490052 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:53.490147 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:53.519999 1585816 cri.go:96] found id: ""
	I1222 01:10:53.520024 1585816 logs.go:282] 0 containers: []
	W1222 01:10:53.520033 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:53.520039 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:53.520101 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:53.556590 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:53.556612 1585816 cri.go:96] found id: ""
	I1222 01:10:53.556620 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:53.556690 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:53.561203 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:53.561283 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:53.592585 1585816 cri.go:96] found id: ""
	I1222 01:10:53.592611 1585816 logs.go:282] 0 containers: []
	W1222 01:10:53.592620 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:53.592627 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:53.592688 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:53.629418 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:53.629439 1585816 cri.go:96] found id: ""
	I1222 01:10:53.629447 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:53.629508 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:53.633821 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:53.633940 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:53.675573 1585816 cri.go:96] found id: ""
	I1222 01:10:53.675656 1585816 logs.go:282] 0 containers: []
	W1222 01:10:53.675681 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:53.675722 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:53.675809 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:53.708628 1585816 cri.go:96] found id: ""
	I1222 01:10:53.708702 1585816 logs.go:282] 0 containers: []
	W1222 01:10:53.708724 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:53.708755 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:53.708799 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:53.805024 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:53.805104 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:53.805133 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:53.870631 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:53.870711 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:53.911029 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:53.911127 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:10:53.948364 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:53.948418 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:54.031090 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:54.031222 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:54.092848 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:54.092942 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:54.143365 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:54.143440 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:54.195982 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:54.196051 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:56.714168 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:10:56.730466 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:10:56.730561 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:10:56.769460 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:56.769481 1585816 cri.go:96] found id: ""
	I1222 01:10:56.769492 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:10:56.769562 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:56.775130 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:10:56.775323 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:10:56.814746 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:56.814778 1585816 cri.go:96] found id: ""
	I1222 01:10:56.814789 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:10:56.814851 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:56.820573 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:10:56.820668 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:10:56.859733 1585816 cri.go:96] found id: ""
	I1222 01:10:56.859763 1585816 logs.go:282] 0 containers: []
	W1222 01:10:56.859772 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:10:56.859779 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:10:56.859871 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:10:56.916865 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:56.916982 1585816 cri.go:96] found id: ""
	I1222 01:10:56.916996 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:10:56.917125 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:56.922190 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:10:56.922324 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:10:56.968890 1585816 cri.go:96] found id: ""
	I1222 01:10:56.968966 1585816 logs.go:282] 0 containers: []
	W1222 01:10:56.968995 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:10:56.969030 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:10:56.969143 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:10:57.034924 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:57.034997 1585816 cri.go:96] found id: ""
	I1222 01:10:57.035020 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:10:57.035110 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:10:57.039360 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:10:57.039497 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:10:57.096380 1585816 cri.go:96] found id: ""
	I1222 01:10:57.096469 1585816 logs.go:282] 0 containers: []
	W1222 01:10:57.096500 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:10:57.096536 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:10:57.096638 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:10:57.140647 1585816 cri.go:96] found id: ""
	I1222 01:10:57.140670 1585816 logs.go:282] 0 containers: []
	W1222 01:10:57.140679 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:10:57.140694 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:10:57.140705 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:10:57.185491 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:10:57.185519 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:10:57.257568 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:10:57.257700 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:10:57.373136 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:10:57.373160 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:10:57.373175 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:10:57.418893 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:10:57.418930 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:10:57.443751 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:10:57.443833 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:10:57.515598 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:10:57.515689 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:10:57.571430 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:10:57.571510 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:10:57.623983 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:10:57.624056 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:00.160680 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:00.231877 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:00.231966 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:00.350233 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:00.350259 1585816 cri.go:96] found id: ""
	I1222 01:11:00.350268 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:00.350336 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:00.364788 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:00.364877 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:00.402842 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:00.402878 1585816 cri.go:96] found id: ""
	I1222 01:11:00.402887 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:00.402957 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:00.408279 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:00.408531 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:00.443764 1585816 cri.go:96] found id: ""
	I1222 01:11:00.443795 1585816 logs.go:282] 0 containers: []
	W1222 01:11:00.443806 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:00.443831 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:00.443911 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:00.477996 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:00.478020 1585816 cri.go:96] found id: ""
	I1222 01:11:00.478029 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:00.478131 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:00.488848 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:00.488972 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:00.520275 1585816 cri.go:96] found id: ""
	I1222 01:11:00.520300 1585816 logs.go:282] 0 containers: []
	W1222 01:11:00.520309 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:00.520317 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:00.520428 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:00.551868 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:00.551902 1585816 cri.go:96] found id: ""
	I1222 01:11:00.551912 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:00.551999 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:00.556281 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:00.556409 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:00.583353 1585816 cri.go:96] found id: ""
	I1222 01:11:00.583452 1585816 logs.go:282] 0 containers: []
	W1222 01:11:00.583468 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:00.583476 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:00.583553 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:00.612838 1585816 cri.go:96] found id: ""
	I1222 01:11:00.612863 1585816 logs.go:282] 0 containers: []
	W1222 01:11:00.612871 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:00.612887 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:00.612899 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:00.692005 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:00.692045 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:00.771181 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:00.771212 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:00.841480 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:00.841519 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:00.881424 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:00.881469 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:00.932302 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:00.932334 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:00.954939 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:00.954971 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:01.095340 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:01.095359 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:01.095374 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:01.161232 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:01.161266 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:03.746775 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:03.758303 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:03.758373 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:03.793229 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:03.793250 1585816 cri.go:96] found id: ""
	I1222 01:11:03.793258 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:03.793316 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:03.797916 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:03.797991 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:03.835883 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:03.835905 1585816 cri.go:96] found id: ""
	I1222 01:11:03.835912 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:03.835977 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:03.840218 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:03.840353 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:03.869777 1585816 cri.go:96] found id: ""
	I1222 01:11:03.869854 1585816 logs.go:282] 0 containers: []
	W1222 01:11:03.869874 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:03.869893 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:03.869980 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:03.908579 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:03.908658 1585816 cri.go:96] found id: ""
	I1222 01:11:03.908679 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:03.908770 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:03.915975 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:03.916103 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:03.955724 1585816 cri.go:96] found id: ""
	I1222 01:11:03.955799 1585816 logs.go:282] 0 containers: []
	W1222 01:11:03.955822 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:03.955844 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:03.955954 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:04.013695 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:04.013772 1585816 cri.go:96] found id: ""
	I1222 01:11:04.013796 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:04.013888 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:04.022798 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:04.022941 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:04.097430 1585816 cri.go:96] found id: ""
	I1222 01:11:04.097508 1585816 logs.go:282] 0 containers: []
	W1222 01:11:04.097530 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:04.097555 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:04.097668 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:04.156674 1585816 cri.go:96] found id: ""
	I1222 01:11:04.156748 1585816 logs.go:282] 0 containers: []
	W1222 01:11:04.156770 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:04.156810 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:04.156842 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:04.244887 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:04.244971 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:04.265256 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:04.265280 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:04.363232 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:04.363249 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:04.363262 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:04.448794 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:04.448871 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:04.499903 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:04.499978 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:04.532174 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:04.532237 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:04.578236 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:04.578272 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:04.621679 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:04.621717 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:07.198550 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:07.209204 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:07.209289 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:07.244585 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:07.244606 1585816 cri.go:96] found id: ""
	I1222 01:11:07.244615 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:07.244683 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:07.249024 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:07.249104 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:07.284636 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:07.284663 1585816 cri.go:96] found id: ""
	I1222 01:11:07.284671 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:07.284729 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:07.289233 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:07.289313 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:07.320366 1585816 cri.go:96] found id: ""
	I1222 01:11:07.320403 1585816 logs.go:282] 0 containers: []
	W1222 01:11:07.320413 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:07.320419 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:07.320478 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:07.350554 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:07.350581 1585816 cri.go:96] found id: ""
	I1222 01:11:07.350590 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:07.350646 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:07.355032 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:07.355111 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:07.388338 1585816 cri.go:96] found id: ""
	I1222 01:11:07.388368 1585816 logs.go:282] 0 containers: []
	W1222 01:11:07.388376 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:07.388383 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:07.388450 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:07.416936 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:07.416964 1585816 cri.go:96] found id: ""
	I1222 01:11:07.416972 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:07.417030 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:07.421648 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:07.421735 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:07.452527 1585816 cri.go:96] found id: ""
	I1222 01:11:07.452558 1585816 logs.go:282] 0 containers: []
	W1222 01:11:07.452568 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:07.452574 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:07.452636 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:07.485310 1585816 cri.go:96] found id: ""
	I1222 01:11:07.485349 1585816 logs.go:282] 0 containers: []
	W1222 01:11:07.485358 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:07.485373 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:07.485385 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:07.538301 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:07.538536 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:07.601413 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:07.601488 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:07.632090 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:07.632168 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:07.679179 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:07.679259 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:07.721355 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:07.721432 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:07.775298 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:07.775371 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:07.846651 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:07.846731 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:07.867543 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:07.867579 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:07.970420 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:10.472020 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:10.483451 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:10.483529 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:10.539586 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:10.539606 1585816 cri.go:96] found id: ""
	I1222 01:11:10.539614 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:10.539668 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:10.543710 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:10.543782 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:10.580336 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:10.580355 1585816 cri.go:96] found id: ""
	I1222 01:11:10.580364 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:10.580436 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:10.585748 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:10.585828 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:10.656388 1585816 cri.go:96] found id: ""
	I1222 01:11:10.656422 1585816 logs.go:282] 0 containers: []
	W1222 01:11:10.656431 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:10.656438 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:10.656507 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:10.708703 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:10.708725 1585816 cri.go:96] found id: ""
	I1222 01:11:10.708733 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:10.708796 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:10.713315 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:10.713389 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:10.760016 1585816 cri.go:96] found id: ""
	I1222 01:11:10.760040 1585816 logs.go:282] 0 containers: []
	W1222 01:11:10.760050 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:10.760058 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:10.760119 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:10.802073 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:10.802108 1585816 cri.go:96] found id: ""
	I1222 01:11:10.802116 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:10.802183 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:10.808528 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:10.808610 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:10.845898 1585816 cri.go:96] found id: ""
	I1222 01:11:10.845921 1585816 logs.go:282] 0 containers: []
	W1222 01:11:10.845929 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:10.845942 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:10.846006 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:10.898027 1585816 cri.go:96] found id: ""
	I1222 01:11:10.898050 1585816 logs.go:282] 0 containers: []
	W1222 01:11:10.898058 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:10.898072 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:10.898136 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:10.920812 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:10.920842 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:10.974754 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:10.974786 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:11.028460 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:11.028490 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:11.114800 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:11.114844 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:11.214150 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:11.214185 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:11.214199 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:11.298436 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:11.298466 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:11.355627 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:11.355681 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:11.412757 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:11.412838 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:13.947257 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:13.963531 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:13.963616 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:14.016600 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:14.016622 1585816 cri.go:96] found id: ""
	I1222 01:11:14.016631 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:14.016706 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:14.026045 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:14.026208 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:14.062393 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:14.062414 1585816 cri.go:96] found id: ""
	I1222 01:11:14.062422 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:14.062488 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:14.073303 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:14.073381 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:14.111918 1585816 cri.go:96] found id: ""
	I1222 01:11:14.111944 1585816 logs.go:282] 0 containers: []
	W1222 01:11:14.111953 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:14.111960 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:14.112026 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:14.160641 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:14.160720 1585816 cri.go:96] found id: ""
	I1222 01:11:14.160743 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:14.160831 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:14.172021 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:14.172151 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:14.232917 1585816 cri.go:96] found id: ""
	I1222 01:11:14.232996 1585816 logs.go:282] 0 containers: []
	W1222 01:11:14.233019 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:14.233041 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:14.233141 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:14.310431 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:14.310510 1585816 cri.go:96] found id: ""
	I1222 01:11:14.310532 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:14.310620 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:14.318638 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:14.318773 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:14.385141 1585816 cri.go:96] found id: ""
	I1222 01:11:14.385222 1585816 logs.go:282] 0 containers: []
	W1222 01:11:14.385245 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:14.385286 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:14.385373 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:14.424278 1585816 cri.go:96] found id: ""
	I1222 01:11:14.424362 1585816 logs.go:282] 0 containers: []
	W1222 01:11:14.424385 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:14.424437 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:14.424474 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:14.451579 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:14.451662 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:14.575662 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:14.575737 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:14.575764 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:14.645850 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:14.645928 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:14.700526 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:14.700616 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:14.753416 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:14.753491 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:14.837917 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:14.837998 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:14.888386 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:14.888490 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:14.955913 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:14.955987 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:17.511070 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:17.522436 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:17.522509 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:17.548330 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:17.548353 1585816 cri.go:96] found id: ""
	I1222 01:11:17.548362 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:17.548428 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:17.552347 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:17.552429 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:17.577723 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:17.577748 1585816 cri.go:96] found id: ""
	I1222 01:11:17.577758 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:17.577817 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:17.581667 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:17.581744 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:17.613918 1585816 cri.go:96] found id: ""
	I1222 01:11:17.613943 1585816 logs.go:282] 0 containers: []
	W1222 01:11:17.613952 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:17.613959 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:17.614022 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:17.641473 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:17.641547 1585816 cri.go:96] found id: ""
	I1222 01:11:17.641565 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:17.641653 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:17.645580 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:17.645665 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:17.673539 1585816 cri.go:96] found id: ""
	I1222 01:11:17.673567 1585816 logs.go:282] 0 containers: []
	W1222 01:11:17.673576 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:17.673583 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:17.673714 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:17.701397 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:17.701421 1585816 cri.go:96] found id: ""
	I1222 01:11:17.701430 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:17.701497 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:17.705643 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:17.705746 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:17.733334 1585816 cri.go:96] found id: ""
	I1222 01:11:17.733364 1585816 logs.go:282] 0 containers: []
	W1222 01:11:17.733374 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:17.733380 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:17.733442 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:17.760406 1585816 cri.go:96] found id: ""
	I1222 01:11:17.760431 1585816 logs.go:282] 0 containers: []
	W1222 01:11:17.760441 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:17.760454 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:17.760465 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:17.823854 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:17.823899 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:17.860071 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:17.860112 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:17.896851 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:17.896893 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:17.937437 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:17.937476 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:17.981164 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:17.981196 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:18.001785 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:18.001820 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:18.091947 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:18.091970 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:18.091984 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:18.122563 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:18.122597 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:20.654186 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:20.666140 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:20.666274 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:20.703129 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:20.703210 1585816 cri.go:96] found id: ""
	I1222 01:11:20.703233 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:20.703323 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:20.709119 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:20.709243 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:20.749993 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:20.750065 1585816 cri.go:96] found id: ""
	I1222 01:11:20.750105 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:20.750199 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:20.755539 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:20.755700 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:20.802593 1585816 cri.go:96] found id: ""
	I1222 01:11:20.802709 1585816 logs.go:282] 0 containers: []
	W1222 01:11:20.802737 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:20.802786 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:20.802914 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:20.849428 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:20.849520 1585816 cri.go:96] found id: ""
	I1222 01:11:20.849553 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:20.849691 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:20.857039 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:20.857240 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:20.904444 1585816 cri.go:96] found id: ""
	I1222 01:11:20.904522 1585816 logs.go:282] 0 containers: []
	W1222 01:11:20.904545 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:20.904566 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:20.904691 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:20.939935 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:20.940013 1585816 cri.go:96] found id: ""
	I1222 01:11:20.940039 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:20.940143 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:20.944776 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:20.944919 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:20.980687 1585816 cri.go:96] found id: ""
	I1222 01:11:20.980786 1585816 logs.go:282] 0 containers: []
	W1222 01:11:20.980809 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:20.980832 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:20.980943 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:21.076546 1585816 cri.go:96] found id: ""
	I1222 01:11:21.076568 1585816 logs.go:282] 0 containers: []
	W1222 01:11:21.076576 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:21.076590 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:21.076616 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:21.116527 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:21.116642 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:21.175390 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:21.175462 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:21.250778 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:21.250856 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:21.329621 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:21.329694 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:21.329724 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:21.380094 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:21.380171 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:21.420983 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:21.421086 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:21.472852 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:21.472933 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:21.506415 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:21.506501 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:24.054329 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:24.065850 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:24.065932 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:24.092483 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:24.092504 1585816 cri.go:96] found id: ""
	I1222 01:11:24.092512 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:24.092572 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:24.096459 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:24.096580 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:24.127145 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:24.127174 1585816 cri.go:96] found id: ""
	I1222 01:11:24.127183 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:24.127240 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:24.130989 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:24.131067 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:24.156363 1585816 cri.go:96] found id: ""
	I1222 01:11:24.156397 1585816 logs.go:282] 0 containers: []
	W1222 01:11:24.156407 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:24.156413 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:24.156475 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:24.193433 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:24.193457 1585816 cri.go:96] found id: ""
	I1222 01:11:24.193466 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:24.193525 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:24.200081 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:24.200181 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:24.227596 1585816 cri.go:96] found id: ""
	I1222 01:11:24.227626 1585816 logs.go:282] 0 containers: []
	W1222 01:11:24.227635 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:24.227642 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:24.227724 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:24.253794 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:24.253819 1585816 cri.go:96] found id: ""
	I1222 01:11:24.253828 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:24.253907 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:24.257620 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:24.257723 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:24.285040 1585816 cri.go:96] found id: ""
	I1222 01:11:24.285069 1585816 logs.go:282] 0 containers: []
	W1222 01:11:24.285078 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:24.285088 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:24.285154 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:24.315098 1585816 cri.go:96] found id: ""
	I1222 01:11:24.315124 1585816 logs.go:282] 0 containers: []
	W1222 01:11:24.315133 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:24.315147 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:24.315179 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:24.330549 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:24.330578 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:24.364815 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:24.364848 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:24.401402 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:24.401434 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:24.435734 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:24.435764 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:24.465108 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:24.465140 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:24.528926 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:24.528964 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:24.591648 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:24.591668 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:24.591680 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:24.625486 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:24.625780 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:27.171719 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:27.182400 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:27.182483 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:27.208731 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:27.208754 1585816 cri.go:96] found id: ""
	I1222 01:11:27.208763 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:27.208822 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:27.212721 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:27.212801 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:27.239943 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:27.239966 1585816 cri.go:96] found id: ""
	I1222 01:11:27.239975 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:27.240033 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:27.243942 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:27.244021 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:27.270775 1585816 cri.go:96] found id: ""
	I1222 01:11:27.270802 1585816 logs.go:282] 0 containers: []
	W1222 01:11:27.270811 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:27.270818 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:27.270881 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:27.296611 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:27.296634 1585816 cri.go:96] found id: ""
	I1222 01:11:27.296643 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:27.296721 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:27.300511 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:27.300614 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:27.325871 1585816 cri.go:96] found id: ""
	I1222 01:11:27.325947 1585816 logs.go:282] 0 containers: []
	W1222 01:11:27.325971 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:27.325994 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:27.326138 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:27.351438 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:27.351459 1585816 cri.go:96] found id: ""
	I1222 01:11:27.351481 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:27.351540 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:27.355391 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:27.355467 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:27.381141 1585816 cri.go:96] found id: ""
	I1222 01:11:27.381171 1585816 logs.go:282] 0 containers: []
	W1222 01:11:27.381180 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:27.381187 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:27.381248 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:27.406267 1585816 cri.go:96] found id: ""
	I1222 01:11:27.406301 1585816 logs.go:282] 0 containers: []
	W1222 01:11:27.406311 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:27.406326 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:27.406343 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:27.421251 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:27.421331 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:27.465115 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:27.465146 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:27.541904 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:27.541943 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:27.625972 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:27.625992 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:27.626018 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:27.658553 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:27.658586 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:27.695102 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:27.695140 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:27.735677 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:27.735708 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:27.787092 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:27.787168 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:30.343924 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:30.354450 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:30.354550 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:30.384434 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:30.384462 1585816 cri.go:96] found id: ""
	I1222 01:11:30.384471 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:30.384535 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:30.389229 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:30.389309 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:30.424765 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:30.424793 1585816 cri.go:96] found id: ""
	I1222 01:11:30.424802 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:30.424860 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:30.429253 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:30.429330 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:30.460596 1585816 cri.go:96] found id: ""
	I1222 01:11:30.460625 1585816 logs.go:282] 0 containers: []
	W1222 01:11:30.460634 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:30.460659 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:30.460742 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:30.494544 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:30.494571 1585816 cri.go:96] found id: ""
	I1222 01:11:30.494579 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:30.494663 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:30.499205 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:30.499324 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:30.544497 1585816 cri.go:96] found id: ""
	I1222 01:11:30.544526 1585816 logs.go:282] 0 containers: []
	W1222 01:11:30.544535 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:30.544578 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:30.544668 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:30.572726 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:30.572753 1585816 cri.go:96] found id: ""
	I1222 01:11:30.572761 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:30.572878 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:30.577218 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:30.577324 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:30.607650 1585816 cri.go:96] found id: ""
	I1222 01:11:30.607678 1585816 logs.go:282] 0 containers: []
	W1222 01:11:30.607687 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:30.607729 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:30.607817 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:30.635277 1585816 cri.go:96] found id: ""
	I1222 01:11:30.635312 1585816 logs.go:282] 0 containers: []
	W1222 01:11:30.635355 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:30.635377 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:30.635394 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:30.651387 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:30.651418 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:30.702265 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:30.702299 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:30.734600 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:30.737496 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:30.879394 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:30.879413 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:30.879426 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:30.917540 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:30.917626 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:30.959867 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:30.959942 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:30.994781 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:30.994853 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:31.053281 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:31.053312 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:33.627203 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:33.638426 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:33.638495 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:33.668748 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:33.668768 1585816 cri.go:96] found id: ""
	I1222 01:11:33.668775 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:33.668833 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:33.673142 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:33.673258 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:33.708756 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:33.708777 1585816 cri.go:96] found id: ""
	I1222 01:11:33.708785 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:33.708849 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:33.714477 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:33.714548 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:33.749133 1585816 cri.go:96] found id: ""
	I1222 01:11:33.749156 1585816 logs.go:282] 0 containers: []
	W1222 01:11:33.749165 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:33.749172 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:33.749234 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:33.796772 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:33.796793 1585816 cri.go:96] found id: ""
	I1222 01:11:33.796801 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:33.796860 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:33.801393 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:33.801465 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:33.835241 1585816 cri.go:96] found id: ""
	I1222 01:11:33.835324 1585816 logs.go:282] 0 containers: []
	W1222 01:11:33.835347 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:33.835371 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:33.835470 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:33.872948 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:33.872970 1585816 cri.go:96] found id: ""
	I1222 01:11:33.872978 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:33.873046 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:33.877753 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:33.877878 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:33.909259 1585816 cri.go:96] found id: ""
	I1222 01:11:33.909358 1585816 logs.go:282] 0 containers: []
	W1222 01:11:33.909385 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:33.909404 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:33.909497 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:33.948708 1585816 cri.go:96] found id: ""
	I1222 01:11:33.948800 1585816 logs.go:282] 0 containers: []
	W1222 01:11:33.948825 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:33.948867 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:33.948896 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:34.073858 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:34.073932 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:34.073965 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:34.165031 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:34.165117 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:34.213350 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:34.213435 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:34.244501 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:34.244578 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:34.280686 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:34.280721 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:34.345141 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:34.345176 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:34.361767 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:34.361795 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:34.405090 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:34.405122 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:36.972281 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:36.983173 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:36.983246 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:37.023460 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:37.023484 1585816 cri.go:96] found id: ""
	I1222 01:11:37.023492 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:37.023561 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:37.028605 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:37.028740 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:37.055468 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:37.055492 1585816 cri.go:96] found id: ""
	I1222 01:11:37.055501 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:37.055558 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:37.059492 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:37.059576 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:37.085224 1585816 cri.go:96] found id: ""
	I1222 01:11:37.085301 1585816 logs.go:282] 0 containers: []
	W1222 01:11:37.085325 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:37.085350 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:37.085463 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:37.113098 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:37.113120 1585816 cri.go:96] found id: ""
	I1222 01:11:37.113129 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:37.113189 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:37.116988 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:37.117079 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:37.143081 1585816 cri.go:96] found id: ""
	I1222 01:11:37.143106 1585816 logs.go:282] 0 containers: []
	W1222 01:11:37.143115 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:37.143122 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:37.143211 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:37.169999 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:37.170024 1585816 cri.go:96] found id: ""
	I1222 01:11:37.170033 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:37.170136 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:37.174150 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:37.174256 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:37.202571 1585816 cri.go:96] found id: ""
	I1222 01:11:37.202602 1585816 logs.go:282] 0 containers: []
	W1222 01:11:37.202611 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:37.202617 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:37.202706 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:37.234391 1585816 cri.go:96] found id: ""
	I1222 01:11:37.234419 1585816 logs.go:282] 0 containers: []
	W1222 01:11:37.234428 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:37.234495 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:37.234515 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:37.252584 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:37.252613 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:37.329914 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:37.329936 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:37.329951 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:37.364189 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:37.364225 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:37.402056 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:37.402093 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:37.437096 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:37.437128 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:37.467874 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:37.467909 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:37.527034 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:37.527069 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:37.559688 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:37.559721 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:40.091478 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:40.102978 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:40.103052 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:40.133699 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:40.133726 1585816 cri.go:96] found id: ""
	I1222 01:11:40.133742 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:40.133808 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:40.138486 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:40.138573 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:40.165871 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:40.165898 1585816 cri.go:96] found id: ""
	I1222 01:11:40.165908 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:40.165965 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:40.170244 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:40.170324 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:40.196669 1585816 cri.go:96] found id: ""
	I1222 01:11:40.196756 1585816 logs.go:282] 0 containers: []
	W1222 01:11:40.196790 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:40.196803 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:40.196879 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:40.224346 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:40.224372 1585816 cri.go:96] found id: ""
	I1222 01:11:40.224381 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:40.224453 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:40.229394 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:40.229491 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:40.264805 1585816 cri.go:96] found id: ""
	I1222 01:11:40.264841 1585816 logs.go:282] 0 containers: []
	W1222 01:11:40.264852 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:40.264859 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:40.264938 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:40.306516 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:40.306536 1585816 cri.go:96] found id: ""
	I1222 01:11:40.306544 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:40.306605 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:40.311917 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:40.311992 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:40.338065 1585816 cri.go:96] found id: ""
	I1222 01:11:40.338131 1585816 logs.go:282] 0 containers: []
	W1222 01:11:40.338141 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:40.338149 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:40.338215 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:40.364380 1585816 cri.go:96] found id: ""
	I1222 01:11:40.364428 1585816 logs.go:282] 0 containers: []
	W1222 01:11:40.364437 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:40.364449 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:40.364461 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:40.431316 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:40.431336 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:40.431349 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:40.469590 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:40.469623 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:40.530742 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:40.530797 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:40.565385 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:40.565419 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:40.600837 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:40.600867 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:40.634966 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:40.635002 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:40.665933 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:40.665966 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:40.702793 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:40.702824 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:43.219206 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:43.229914 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:43.229987 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:43.259816 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:43.259841 1585816 cri.go:96] found id: ""
	I1222 01:11:43.259850 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:43.259913 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:43.264555 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:43.264639 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:43.293578 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:43.293604 1585816 cri.go:96] found id: ""
	I1222 01:11:43.293613 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:43.293669 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:43.298602 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:43.298673 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:43.331742 1585816 cri.go:96] found id: ""
	I1222 01:11:43.331771 1585816 logs.go:282] 0 containers: []
	W1222 01:11:43.331780 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:43.331787 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:43.331873 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:43.357825 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:43.357848 1585816 cri.go:96] found id: ""
	I1222 01:11:43.357860 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:43.357940 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:43.361851 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:43.361954 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:43.389138 1585816 cri.go:96] found id: ""
	I1222 01:11:43.389164 1585816 logs.go:282] 0 containers: []
	W1222 01:11:43.389173 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:43.389180 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:43.389242 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:43.416775 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:43.416798 1585816 cri.go:96] found id: ""
	I1222 01:11:43.416808 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:43.416913 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:43.420666 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:43.420741 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:43.444624 1585816 cri.go:96] found id: ""
	I1222 01:11:43.444658 1585816 logs.go:282] 0 containers: []
	W1222 01:11:43.444667 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:43.444674 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:43.444743 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:43.478116 1585816 cri.go:96] found id: ""
	I1222 01:11:43.478145 1585816 logs.go:282] 0 containers: []
	W1222 01:11:43.478155 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:43.478170 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:43.478194 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:43.493454 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:43.493484 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:43.565552 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:43.565574 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:43.565588 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:43.600187 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:43.600222 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:43.642848 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:43.642881 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:43.703504 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:43.703542 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:43.739673 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:43.739711 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:43.773520 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:43.773553 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:43.802135 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:43.802166 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:46.344764 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:46.355280 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:46.355354 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:46.381887 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:46.381910 1585816 cri.go:96] found id: ""
	I1222 01:11:46.381919 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:46.382000 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:46.385902 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:46.386013 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:46.411560 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:46.411628 1585816 cri.go:96] found id: ""
	I1222 01:11:46.411651 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:46.411733 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:46.415637 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:46.415720 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:46.444801 1585816 cri.go:96] found id: ""
	I1222 01:11:46.444825 1585816 logs.go:282] 0 containers: []
	W1222 01:11:46.444833 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:46.444840 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:46.444963 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:46.475552 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:46.475618 1585816 cri.go:96] found id: ""
	I1222 01:11:46.475641 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:46.475707 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:46.479795 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:46.479882 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:46.505098 1585816 cri.go:96] found id: ""
	I1222 01:11:46.505126 1585816 logs.go:282] 0 containers: []
	W1222 01:11:46.505135 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:46.505143 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:46.505206 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:46.534730 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:46.534792 1585816 cri.go:96] found id: ""
	I1222 01:11:46.534802 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:46.534899 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:46.539824 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:46.539949 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:46.578137 1585816 cri.go:96] found id: ""
	I1222 01:11:46.578168 1585816 logs.go:282] 0 containers: []
	W1222 01:11:46.578177 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:46.578185 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:46.578248 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:46.605397 1585816 cri.go:96] found id: ""
	I1222 01:11:46.605477 1585816 logs.go:282] 0 containers: []
	W1222 01:11:46.605501 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:46.605529 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:46.605568 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:46.672546 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:46.672570 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:46.672584 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:46.707373 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:46.707408 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:46.741245 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:46.741278 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:46.779852 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:46.779885 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:46.813502 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:46.813532 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:46.843109 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:46.843141 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:46.875107 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:46.875188 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:46.933230 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:46.933271 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:49.449313 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:49.466394 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:49.466471 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:49.494368 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:49.494444 1585816 cri.go:96] found id: ""
	I1222 01:11:49.494461 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:49.494536 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:49.498400 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:49.498477 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:49.523896 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:49.523925 1585816 cri.go:96] found id: ""
	I1222 01:11:49.523934 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:49.523996 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:49.527960 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:49.528073 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:49.555495 1585816 cri.go:96] found id: ""
	I1222 01:11:49.555583 1585816 logs.go:282] 0 containers: []
	W1222 01:11:49.555606 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:49.555629 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:49.555733 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:49.583147 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:49.583182 1585816 cri.go:96] found id: ""
	I1222 01:11:49.583191 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:49.583275 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:49.587273 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:49.587373 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:49.613359 1585816 cri.go:96] found id: ""
	I1222 01:11:49.613396 1585816 logs.go:282] 0 containers: []
	W1222 01:11:49.613405 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:49.613412 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:49.613482 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:49.643243 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:49.643267 1585816 cri.go:96] found id: ""
	I1222 01:11:49.643276 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:49.643335 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:49.647269 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:49.647349 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:49.675851 1585816 cri.go:96] found id: ""
	I1222 01:11:49.675931 1585816 logs.go:282] 0 containers: []
	W1222 01:11:49.675945 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:49.675953 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:49.676033 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:49.701176 1585816 cri.go:96] found id: ""
	I1222 01:11:49.701200 1585816 logs.go:282] 0 containers: []
	W1222 01:11:49.701208 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:49.701221 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:49.701233 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:49.760002 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:49.760041 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:49.775746 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:49.775775 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:49.812411 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:49.812457 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:49.848755 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:49.848795 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:49.878808 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:49.878843 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:49.916031 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:49.916065 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:49.982966 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:49.982990 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:49.983003 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:50.029330 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:50.029370 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:52.583762 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:52.594301 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:52.594434 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:52.618899 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:52.618924 1585816 cri.go:96] found id: ""
	I1222 01:11:52.618933 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:52.619024 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:52.622972 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:52.623068 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:52.649687 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:52.649712 1585816 cri.go:96] found id: ""
	I1222 01:11:52.649721 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:52.649803 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:52.654014 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:52.654140 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:52.683164 1585816 cri.go:96] found id: ""
	I1222 01:11:52.683191 1585816 logs.go:282] 0 containers: []
	W1222 01:11:52.683201 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:52.683208 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:52.683270 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:52.709597 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:52.709616 1585816 cri.go:96] found id: ""
	I1222 01:11:52.709625 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:52.709682 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:52.713526 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:52.713602 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:52.738623 1585816 cri.go:96] found id: ""
	I1222 01:11:52.738649 1585816 logs.go:282] 0 containers: []
	W1222 01:11:52.738657 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:52.738665 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:52.738728 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:52.765761 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:52.765785 1585816 cri.go:96] found id: ""
	I1222 01:11:52.765793 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:52.765851 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:52.769729 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:52.769808 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:52.796597 1585816 cri.go:96] found id: ""
	I1222 01:11:52.796624 1585816 logs.go:282] 0 containers: []
	W1222 01:11:52.796632 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:52.796639 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:52.796700 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:52.821672 1585816 cri.go:96] found id: ""
	I1222 01:11:52.821701 1585816 logs.go:282] 0 containers: []
	W1222 01:11:52.821710 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:52.821724 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:52.821736 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:52.837056 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:52.837130 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:52.870566 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:52.870603 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:52.905659 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:52.905692 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:52.980186 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:52.980262 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:52.980292 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:53.036829 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:53.036908 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:53.082925 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:53.082959 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:53.112000 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:53.112038 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:53.145356 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:53.145470 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:55.703821 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:55.714725 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:55.714807 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:55.744739 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:55.744814 1585816 cri.go:96] found id: ""
	I1222 01:11:55.744838 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:55.744927 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:55.748906 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:55.749035 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:55.775646 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:55.775677 1585816 cri.go:96] found id: ""
	I1222 01:11:55.775686 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:55.775748 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:55.780046 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:55.780121 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:55.808249 1585816 cri.go:96] found id: ""
	I1222 01:11:55.808277 1585816 logs.go:282] 0 containers: []
	W1222 01:11:55.808285 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:55.808292 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:55.808367 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:55.835583 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:55.835617 1585816 cri.go:96] found id: ""
	I1222 01:11:55.835627 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:55.835695 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:55.839649 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:55.839725 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:55.866041 1585816 cri.go:96] found id: ""
	I1222 01:11:55.866068 1585816 logs.go:282] 0 containers: []
	W1222 01:11:55.866104 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:55.866113 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:55.866171 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:55.892552 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:55.892632 1585816 cri.go:96] found id: ""
	I1222 01:11:55.892655 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:55.892747 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:55.896575 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:55.896698 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:55.922764 1585816 cri.go:96] found id: ""
	I1222 01:11:55.922865 1585816 logs.go:282] 0 containers: []
	W1222 01:11:55.922898 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:55.922914 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:55.923022 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:55.955731 1585816 cri.go:96] found id: ""
	I1222 01:11:55.955767 1585816 logs.go:282] 0 containers: []
	W1222 01:11:55.955778 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:55.955791 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:55.955806 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:56.002995 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:56.003042 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:56.051476 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:56.051509 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:11:56.085313 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:56.085356 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:56.146692 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:56.146731 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:56.162730 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:56.162760 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:56.198561 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:56.198598 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:56.232890 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:56.232920 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:56.299321 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:56.299388 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:56.299416 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:58.834742 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:11:58.845257 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:11:58.845338 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:11:58.878170 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:58.878194 1585816 cri.go:96] found id: ""
	I1222 01:11:58.878203 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:11:58.878279 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:58.882252 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:11:58.882323 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:11:58.907363 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:58.907378 1585816 cri.go:96] found id: ""
	I1222 01:11:58.907386 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:11:58.907437 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:58.911794 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:11:58.911867 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:11:58.939331 1585816 cri.go:96] found id: ""
	I1222 01:11:58.939357 1585816 logs.go:282] 0 containers: []
	W1222 01:11:58.939366 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:11:58.939372 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:11:58.939457 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:11:58.969320 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:58.969342 1585816 cri.go:96] found id: ""
	I1222 01:11:58.969350 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:11:58.969407 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:58.973254 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:11:58.973335 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:11:59.007303 1585816 cri.go:96] found id: ""
	I1222 01:11:59.007330 1585816 logs.go:282] 0 containers: []
	W1222 01:11:59.007339 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:11:59.007350 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:11:59.007416 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:11:59.057925 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:59.057944 1585816 cri.go:96] found id: ""
	I1222 01:11:59.057953 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:11:59.058016 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:11:59.063490 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:11:59.063565 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:11:59.089898 1585816 cri.go:96] found id: ""
	I1222 01:11:59.089961 1585816 logs.go:282] 0 containers: []
	W1222 01:11:59.089992 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:11:59.090011 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:11:59.090134 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:11:59.115616 1585816 cri.go:96] found id: ""
	I1222 01:11:59.115642 1585816 logs.go:282] 0 containers: []
	W1222 01:11:59.115651 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:11:59.115685 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:11:59.115706 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:11:59.145578 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:11:59.145652 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:11:59.216300 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:11:59.216320 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:11:59.216332 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:11:59.251318 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:11:59.251352 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:11:59.285135 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:11:59.285168 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:11:59.319659 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:11:59.319693 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:11:59.353334 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:11:59.353368 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:11:59.413369 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:11:59.413402 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:11:59.428255 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:11:59.428285 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:01.962747 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:01.974194 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:01.974272 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:02.017554 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:02.017581 1585816 cri.go:96] found id: ""
	I1222 01:12:02.017589 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:02.017653 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:02.022467 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:02.022545 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:02.052056 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:02.052076 1585816 cri.go:96] found id: ""
	I1222 01:12:02.052084 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:02.052146 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:02.056409 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:02.056502 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:02.084986 1585816 cri.go:96] found id: ""
	I1222 01:12:02.085055 1585816 logs.go:282] 0 containers: []
	W1222 01:12:02.085076 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:02.085098 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:02.085190 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:02.112312 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:02.112396 1585816 cri.go:96] found id: ""
	I1222 01:12:02.112418 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:02.112491 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:02.116674 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:02.116799 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:02.143055 1585816 cri.go:96] found id: ""
	I1222 01:12:02.143086 1585816 logs.go:282] 0 containers: []
	W1222 01:12:02.143095 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:02.143101 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:02.143163 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:02.170334 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:02.170358 1585816 cri.go:96] found id: ""
	I1222 01:12:02.170373 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:02.170445 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:02.174602 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:02.174709 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:02.202610 1585816 cri.go:96] found id: ""
	I1222 01:12:02.202640 1585816 logs.go:282] 0 containers: []
	W1222 01:12:02.202650 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:02.202657 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:02.202741 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:02.243651 1585816 cri.go:96] found id: ""
	I1222 01:12:02.243675 1585816 logs.go:282] 0 containers: []
	W1222 01:12:02.243684 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:02.243716 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:02.243739 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:02.279581 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:02.279611 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:02.312576 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:02.312605 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:02.373302 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:02.373340 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:02.439324 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:02.439345 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:02.439359 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:02.493921 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:02.493953 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:02.530415 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:02.530449 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:02.560800 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:02.560837 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:02.576734 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:02.576765 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:05.111127 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:05.122053 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:05.122149 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:05.148730 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:05.148756 1585816 cri.go:96] found id: ""
	I1222 01:12:05.148766 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:05.148828 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:05.153172 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:05.153254 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:05.180645 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:05.180672 1585816 cri.go:96] found id: ""
	I1222 01:12:05.180682 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:05.180744 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:05.184875 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:05.184954 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:05.217082 1585816 cri.go:96] found id: ""
	I1222 01:12:05.217106 1585816 logs.go:282] 0 containers: []
	W1222 01:12:05.217115 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:05.217122 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:05.217184 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:05.243838 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:05.243861 1585816 cri.go:96] found id: ""
	I1222 01:12:05.243870 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:05.243951 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:05.247951 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:05.248056 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:05.274583 1585816 cri.go:96] found id: ""
	I1222 01:12:05.274608 1585816 logs.go:282] 0 containers: []
	W1222 01:12:05.274616 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:05.274624 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:05.274687 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:05.300299 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:05.300330 1585816 cri.go:96] found id: ""
	I1222 01:12:05.300340 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:05.300413 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:05.304368 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:05.304507 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:05.329426 1585816 cri.go:96] found id: ""
	I1222 01:12:05.329453 1585816 logs.go:282] 0 containers: []
	W1222 01:12:05.329468 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:05.329475 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:05.329539 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:05.355710 1585816 cri.go:96] found id: ""
	I1222 01:12:05.355737 1585816 logs.go:282] 0 containers: []
	W1222 01:12:05.355747 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:05.355780 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:05.355800 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:05.413607 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:05.413643 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:05.429608 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:05.429690 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:05.469599 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:05.469630 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:05.499946 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:05.499980 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:05.530696 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:05.530728 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:05.598331 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:05.598358 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:05.598373 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:05.633627 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:05.633660 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:05.673575 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:05.673607 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:08.208552 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:08.219261 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:08.219338 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:08.248279 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:08.248311 1585816 cri.go:96] found id: ""
	I1222 01:12:08.248321 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:08.248412 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:08.252738 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:08.252818 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:08.279156 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:08.279181 1585816 cri.go:96] found id: ""
	I1222 01:12:08.279190 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:08.279249 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:08.283455 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:08.283565 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:08.310405 1585816 cri.go:96] found id: ""
	I1222 01:12:08.310432 1585816 logs.go:282] 0 containers: []
	W1222 01:12:08.310442 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:08.310449 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:08.310510 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:08.338209 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:08.338233 1585816 cri.go:96] found id: ""
	I1222 01:12:08.338242 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:08.338321 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:08.342519 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:08.342603 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:08.367975 1585816 cri.go:96] found id: ""
	I1222 01:12:08.368002 1585816 logs.go:282] 0 containers: []
	W1222 01:12:08.368011 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:08.368018 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:08.368079 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:08.397038 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:08.397062 1585816 cri.go:96] found id: ""
	I1222 01:12:08.397071 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:08.397152 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:08.400886 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:08.400962 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:08.426826 1585816 cri.go:96] found id: ""
	I1222 01:12:08.426852 1585816 logs.go:282] 0 containers: []
	W1222 01:12:08.426862 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:08.426869 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:08.426930 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:08.459921 1585816 cri.go:96] found id: ""
	I1222 01:12:08.459949 1585816 logs.go:282] 0 containers: []
	W1222 01:12:08.459959 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:08.459972 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:08.459984 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:08.519754 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:08.519796 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:08.590360 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:08.590379 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:08.590392 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:08.624769 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:08.624803 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:08.654521 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:08.654558 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:08.670459 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:08.670489 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:08.705786 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:08.705820 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:08.742044 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:08.742195 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:08.783492 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:08.783534 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:11.321595 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:11.332367 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:11.332446 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:11.359622 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:11.359645 1585816 cri.go:96] found id: ""
	I1222 01:12:11.359653 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:11.359712 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:11.363702 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:11.363780 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:11.388555 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:11.388580 1585816 cri.go:96] found id: ""
	I1222 01:12:11.388590 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:11.388651 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:11.392508 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:11.392589 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:11.425101 1585816 cri.go:96] found id: ""
	I1222 01:12:11.425129 1585816 logs.go:282] 0 containers: []
	W1222 01:12:11.425138 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:11.425145 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:11.425210 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:11.451960 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:11.452039 1585816 cri.go:96] found id: ""
	I1222 01:12:11.452071 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:11.452162 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:11.457450 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:11.457603 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:11.488181 1585816 cri.go:96] found id: ""
	I1222 01:12:11.488206 1585816 logs.go:282] 0 containers: []
	W1222 01:12:11.488215 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:11.488222 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:11.488285 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:11.516981 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:11.517005 1585816 cri.go:96] found id: ""
	I1222 01:12:11.517014 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:11.517073 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:11.520945 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:11.521024 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:11.551481 1585816 cri.go:96] found id: ""
	I1222 01:12:11.551518 1585816 logs.go:282] 0 containers: []
	W1222 01:12:11.551528 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:11.551551 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:11.551627 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:11.577612 1585816 cri.go:96] found id: ""
	I1222 01:12:11.577647 1585816 logs.go:282] 0 containers: []
	W1222 01:12:11.577657 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:11.577671 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:11.577683 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:11.636238 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:11.636277 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:11.674619 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:11.674654 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:11.714737 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:11.714770 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:11.763747 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:11.763825 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:11.784086 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:11.784159 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:11.859531 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:11.859557 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:11.859587 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:11.893444 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:11.893477 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:11.933266 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:11.933297 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:14.463874 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:14.474686 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:14.474763 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:14.500540 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:14.500564 1585816 cri.go:96] found id: ""
	I1222 01:12:14.500573 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:14.500640 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:14.504668 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:14.504746 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:14.530148 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:14.530173 1585816 cri.go:96] found id: ""
	I1222 01:12:14.530183 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:14.530240 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:14.534030 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:14.534131 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:14.560298 1585816 cri.go:96] found id: ""
	I1222 01:12:14.560321 1585816 logs.go:282] 0 containers: []
	W1222 01:12:14.560330 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:14.560342 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:14.560441 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:14.588636 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:14.588658 1585816 cri.go:96] found id: ""
	I1222 01:12:14.588666 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:14.588725 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:14.592474 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:14.592549 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:14.622243 1585816 cri.go:96] found id: ""
	I1222 01:12:14.622266 1585816 logs.go:282] 0 containers: []
	W1222 01:12:14.622275 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:14.622281 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:14.622344 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:14.648746 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:14.648769 1585816 cri.go:96] found id: ""
	I1222 01:12:14.648778 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:14.648838 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:14.652947 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:14.653025 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:14.678711 1585816 cri.go:96] found id: ""
	I1222 01:12:14.678737 1585816 logs.go:282] 0 containers: []
	W1222 01:12:14.678746 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:14.678753 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:14.678850 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:14.709407 1585816 cri.go:96] found id: ""
	I1222 01:12:14.709439 1585816 logs.go:282] 0 containers: []
	W1222 01:12:14.709448 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:14.709462 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:14.709472 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:14.770844 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:14.770885 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:14.855301 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:14.855328 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:14.855342 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:14.898919 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:14.898951 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:14.931888 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:14.931925 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:14.966346 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:14.966382 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:14.981775 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:14.981815 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:15.040796 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:15.040843 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:15.073502 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:15.073549 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:17.621860 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:17.633305 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:17.633376 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:17.662710 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:17.662730 1585816 cri.go:96] found id: ""
	I1222 01:12:17.662739 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:17.662797 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:17.667435 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:17.667508 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:17.696841 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:17.696915 1585816 cri.go:96] found id: ""
	I1222 01:12:17.696939 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:17.697036 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:17.702833 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:17.702959 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:17.731918 1585816 cri.go:96] found id: ""
	I1222 01:12:17.731996 1585816 logs.go:282] 0 containers: []
	W1222 01:12:17.732019 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:17.732042 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:17.732128 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:17.808985 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:17.809056 1585816 cri.go:96] found id: ""
	I1222 01:12:17.809079 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:17.809172 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:17.824677 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:17.824802 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:17.862234 1585816 cri.go:96] found id: ""
	I1222 01:12:17.862259 1585816 logs.go:282] 0 containers: []
	W1222 01:12:17.862269 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:17.862275 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:17.862360 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:17.899148 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:17.899173 1585816 cri.go:96] found id: ""
	I1222 01:12:17.899182 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:17.899272 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:17.903344 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:17.903445 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:17.931262 1585816 cri.go:96] found id: ""
	I1222 01:12:17.931288 1585816 logs.go:282] 0 containers: []
	W1222 01:12:17.931297 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:17.931305 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:17.931395 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:17.957724 1585816 cri.go:96] found id: ""
	I1222 01:12:17.957752 1585816 logs.go:282] 0 containers: []
	W1222 01:12:17.957761 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:17.957775 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:17.957818 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:18.031599 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:18.031624 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:18.031640 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:18.068405 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:18.068441 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:18.113297 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:18.113330 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:18.173683 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:18.173719 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:18.209329 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:18.209364 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:18.244036 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:18.244072 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:18.274261 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:18.274296 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:18.304551 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:18.304582 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:20.821531 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:20.832165 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:20.832238 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:20.859135 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:20.859160 1585816 cri.go:96] found id: ""
	I1222 01:12:20.859170 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:20.859232 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:20.863592 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:20.863685 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:20.891390 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:20.891412 1585816 cri.go:96] found id: ""
	I1222 01:12:20.891421 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:20.891475 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:20.895453 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:20.895541 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:20.922374 1585816 cri.go:96] found id: ""
	I1222 01:12:20.922403 1585816 logs.go:282] 0 containers: []
	W1222 01:12:20.922412 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:20.922418 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:20.922479 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:20.949452 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:20.949476 1585816 cri.go:96] found id: ""
	I1222 01:12:20.949484 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:20.949541 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:20.953571 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:20.953656 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:20.980678 1585816 cri.go:96] found id: ""
	I1222 01:12:20.980705 1585816 logs.go:282] 0 containers: []
	W1222 01:12:20.980714 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:20.980720 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:20.980781 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:21.011999 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:21.012023 1585816 cri.go:96] found id: ""
	I1222 01:12:21.012032 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:21.012114 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:21.016305 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:21.016437 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:21.044357 1585816 cri.go:96] found id: ""
	I1222 01:12:21.044403 1585816 logs.go:282] 0 containers: []
	W1222 01:12:21.044412 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:21.044419 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:21.044482 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:21.073184 1585816 cri.go:96] found id: ""
	I1222 01:12:21.073214 1585816 logs.go:282] 0 containers: []
	W1222 01:12:21.073224 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:21.073237 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:21.073250 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:21.136867 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:21.136892 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:21.136907 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:21.171171 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:21.171207 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:21.201441 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:21.201476 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:21.232828 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:21.232862 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:21.290493 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:21.290528 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:21.325498 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:21.325529 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:21.360757 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:21.360790 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:21.397038 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:21.397070 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:23.912709 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:23.923170 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:23.923270 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:23.948094 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:23.948114 1585816 cri.go:96] found id: ""
	I1222 01:12:23.948122 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:23.948181 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:23.952058 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:23.952136 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:23.979272 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:23.979297 1585816 cri.go:96] found id: ""
	I1222 01:12:23.979306 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:23.979362 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:23.983197 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:23.983276 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:24.012457 1585816 cri.go:96] found id: ""
	I1222 01:12:24.012500 1585816 logs.go:282] 0 containers: []
	W1222 01:12:24.012510 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:24.012517 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:24.012603 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:24.041192 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:24.041280 1585816 cri.go:96] found id: ""
	I1222 01:12:24.041304 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:24.041398 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:24.045426 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:24.045506 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:24.072177 1585816 cri.go:96] found id: ""
	I1222 01:12:24.072204 1585816 logs.go:282] 0 containers: []
	W1222 01:12:24.072213 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:24.072220 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:24.072281 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:24.099091 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:24.099115 1585816 cri.go:96] found id: ""
	I1222 01:12:24.099124 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:24.099201 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:24.103086 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:24.103181 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:24.132952 1585816 cri.go:96] found id: ""
	I1222 01:12:24.132988 1585816 logs.go:282] 0 containers: []
	W1222 01:12:24.132997 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:24.133003 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:24.133074 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:24.158927 1585816 cri.go:96] found id: ""
	I1222 01:12:24.158954 1585816 logs.go:282] 0 containers: []
	W1222 01:12:24.158964 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:24.158979 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:24.158992 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:24.224785 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:24.224808 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:24.224829 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:24.265519 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:24.265549 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:24.311725 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:24.311754 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:24.344613 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:24.344644 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:24.377605 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:24.377640 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:24.436763 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:24.436803 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:24.476371 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:24.476406 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:24.526841 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:24.526878 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:27.043585 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:27.053729 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:27.053813 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:27.079868 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:27.079890 1585816 cri.go:96] found id: ""
	I1222 01:12:27.079899 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:27.079961 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:27.083760 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:27.083834 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:27.113642 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:27.113670 1585816 cri.go:96] found id: ""
	I1222 01:12:27.113693 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:27.113754 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:27.117640 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:27.117739 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:27.142667 1585816 cri.go:96] found id: ""
	I1222 01:12:27.142735 1585816 logs.go:282] 0 containers: []
	W1222 01:12:27.142749 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:27.142756 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:27.142818 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:27.169479 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:27.169506 1585816 cri.go:96] found id: ""
	I1222 01:12:27.169523 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:27.169603 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:27.173837 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:27.173939 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:27.207695 1585816 cri.go:96] found id: ""
	I1222 01:12:27.207766 1585816 logs.go:282] 0 containers: []
	W1222 01:12:27.207780 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:27.207788 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:27.207867 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:27.235111 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:27.235137 1585816 cri.go:96] found id: ""
	I1222 01:12:27.235155 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:27.235218 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:27.239343 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:27.239420 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:27.265021 1585816 cri.go:96] found id: ""
	I1222 01:12:27.265053 1585816 logs.go:282] 0 containers: []
	W1222 01:12:27.265063 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:27.265071 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:27.265133 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:27.291188 1585816 cri.go:96] found id: ""
	I1222 01:12:27.291212 1585816 logs.go:282] 0 containers: []
	W1222 01:12:27.291221 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:27.291238 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:27.291251 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:27.307486 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:27.307514 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:27.345922 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:27.345958 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:27.379941 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:27.379975 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:27.409021 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:27.409056 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:27.438046 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:27.438113 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:27.501785 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:27.501824 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:27.574542 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:27.574568 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:27.574581 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:27.621353 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:27.621385 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:30.153578 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:30.165417 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:30.165515 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:30.195815 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:30.195848 1585816 cri.go:96] found id: ""
	I1222 01:12:30.195857 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:30.195938 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:30.200472 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:30.200555 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:30.229421 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:30.229442 1585816 cri.go:96] found id: ""
	I1222 01:12:30.229450 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:30.229512 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:30.233731 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:30.233834 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:30.260373 1585816 cri.go:96] found id: ""
	I1222 01:12:30.260460 1585816 logs.go:282] 0 containers: []
	W1222 01:12:30.260487 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:30.260514 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:30.260589 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:30.286673 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:30.286694 1585816 cri.go:96] found id: ""
	I1222 01:12:30.286703 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:30.286783 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:30.290851 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:30.290986 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:30.319685 1585816 cri.go:96] found id: ""
	I1222 01:12:30.319766 1585816 logs.go:282] 0 containers: []
	W1222 01:12:30.319789 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:30.319803 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:30.319891 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:30.349223 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:30.349269 1585816 cri.go:96] found id: ""
	I1222 01:12:30.349278 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:30.349345 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:30.353262 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:30.353411 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:30.380646 1585816 cri.go:96] found id: ""
	I1222 01:12:30.380672 1585816 logs.go:282] 0 containers: []
	W1222 01:12:30.380681 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:30.380688 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:30.380751 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:30.408585 1585816 cri.go:96] found id: ""
	I1222 01:12:30.408667 1585816 logs.go:282] 0 containers: []
	W1222 01:12:30.408692 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:30.408712 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:30.408738 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:30.449019 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:30.449051 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:30.477557 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:30.477593 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:30.510232 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:30.510262 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:30.575006 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:30.575044 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:30.591393 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:30.591425 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:30.656601 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:30.656679 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:30.656707 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:30.694031 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:30.694063 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:30.728455 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:30.728489 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:33.264568 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:33.275165 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:33.275240 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:33.301426 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:33.301447 1585816 cri.go:96] found id: ""
	I1222 01:12:33.301456 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:33.301518 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:33.305362 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:33.305440 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:33.331871 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:33.331895 1585816 cri.go:96] found id: ""
	I1222 01:12:33.331905 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:33.331961 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:33.335778 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:33.335852 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:33.361776 1585816 cri.go:96] found id: ""
	I1222 01:12:33.361799 1585816 logs.go:282] 0 containers: []
	W1222 01:12:33.361808 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:33.361815 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:33.361872 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:33.391513 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:33.391534 1585816 cri.go:96] found id: ""
	I1222 01:12:33.391542 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:33.391603 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:33.395608 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:33.395685 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:33.421012 1585816 cri.go:96] found id: ""
	I1222 01:12:33.421039 1585816 logs.go:282] 0 containers: []
	W1222 01:12:33.421048 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:33.421059 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:33.421124 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:33.447731 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:33.447796 1585816 cri.go:96] found id: ""
	I1222 01:12:33.447879 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:33.447963 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:33.455563 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:33.455697 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:33.482766 1585816 cri.go:96] found id: ""
	I1222 01:12:33.482835 1585816 logs.go:282] 0 containers: []
	W1222 01:12:33.482858 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:33.482882 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:33.482960 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:33.516473 1585816 cri.go:96] found id: ""
	I1222 01:12:33.516499 1585816 logs.go:282] 0 containers: []
	W1222 01:12:33.516508 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:33.516521 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:33.516534 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:33.557512 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:33.557545 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:33.616618 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:33.616654 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:33.655408 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:33.655440 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:33.684963 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:33.685002 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:33.715086 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:33.715115 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:33.730474 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:33.730503 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:33.797028 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:33.797052 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:33.797065 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:33.843617 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:33.843650 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:36.386225 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:36.402389 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:36.402461 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:36.441495 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:36.441515 1585816 cri.go:96] found id: ""
	I1222 01:12:36.441523 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:36.441583 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:36.450064 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:36.450166 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:36.498422 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:36.498448 1585816 cri.go:96] found id: ""
	I1222 01:12:36.498456 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:36.498514 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:36.502997 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:36.503072 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:36.552749 1585816 cri.go:96] found id: ""
	I1222 01:12:36.552772 1585816 logs.go:282] 0 containers: []
	W1222 01:12:36.552781 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:36.552787 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:36.552851 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:36.601812 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:36.601834 1585816 cri.go:96] found id: ""
	I1222 01:12:36.601842 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:36.601903 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:36.607005 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:36.607080 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:36.639329 1585816 cri.go:96] found id: ""
	I1222 01:12:36.639352 1585816 logs.go:282] 0 containers: []
	W1222 01:12:36.639361 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:36.639368 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:36.639427 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:36.670703 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:36.670724 1585816 cri.go:96] found id: ""
	I1222 01:12:36.670732 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:36.670789 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:36.675016 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:36.675148 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:36.714625 1585816 cri.go:96] found id: ""
	I1222 01:12:36.714702 1585816 logs.go:282] 0 containers: []
	W1222 01:12:36.714726 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:36.714748 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:36.714857 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:36.747349 1585816 cri.go:96] found id: ""
	I1222 01:12:36.747425 1585816 logs.go:282] 0 containers: []
	W1222 01:12:36.747448 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:36.747495 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:36.747526 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:36.834756 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:36.834773 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:36.834785 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:36.879900 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:36.879935 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:36.922452 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:36.922490 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:36.952050 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:36.952082 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:37.021967 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:37.022008 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:37.099242 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:37.099322 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:37.144305 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:37.144385 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:37.180177 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:37.180213 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:39.699126 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:39.709504 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:39.709581 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:39.737557 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:39.737579 1585816 cri.go:96] found id: ""
	I1222 01:12:39.737588 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:39.737643 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:39.741500 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:39.741580 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:39.768871 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:39.768893 1585816 cri.go:96] found id: ""
	I1222 01:12:39.768903 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:39.768959 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:39.773081 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:39.773178 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:39.798004 1585816 cri.go:96] found id: ""
	I1222 01:12:39.798028 1585816 logs.go:282] 0 containers: []
	W1222 01:12:39.798036 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:39.798042 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:39.798135 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:39.823755 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:39.823778 1585816 cri.go:96] found id: ""
	I1222 01:12:39.823786 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:39.823847 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:39.827871 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:39.827959 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:39.856981 1585816 cri.go:96] found id: ""
	I1222 01:12:39.857005 1585816 logs.go:282] 0 containers: []
	W1222 01:12:39.857014 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:39.857020 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:39.857090 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:39.882869 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:39.882896 1585816 cri.go:96] found id: ""
	I1222 01:12:39.882905 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:39.882986 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:39.887047 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:39.887166 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:39.913327 1585816 cri.go:96] found id: ""
	I1222 01:12:39.913354 1585816 logs.go:282] 0 containers: []
	W1222 01:12:39.913364 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:39.913370 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:39.913432 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:39.941622 1585816 cri.go:96] found id: ""
	I1222 01:12:39.941649 1585816 logs.go:282] 0 containers: []
	W1222 01:12:39.941659 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:39.941673 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:39.941698 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:39.961350 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:39.961482 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:40.073750 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:40.073776 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:40.073793 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:40.113045 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:40.113116 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:40.184826 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:40.184864 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:40.249971 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:40.250004 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:40.305809 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:40.305841 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:40.352199 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:40.352234 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:40.395699 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:40.395731 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:42.957074 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:42.968644 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:42.968714 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:42.995485 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:42.995519 1585816 cri.go:96] found id: ""
	I1222 01:12:42.995528 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:42.995586 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:42.999649 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:42.999817 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:43.033815 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:43.033839 1585816 cri.go:96] found id: ""
	I1222 01:12:43.033848 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:43.033908 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:43.037731 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:43.037808 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:43.063858 1585816 cri.go:96] found id: ""
	I1222 01:12:43.063882 1585816 logs.go:282] 0 containers: []
	W1222 01:12:43.063890 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:43.063896 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:43.063967 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:43.090543 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:43.090563 1585816 cri.go:96] found id: ""
	I1222 01:12:43.090571 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:43.090658 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:43.094430 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:43.094510 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:43.124594 1585816 cri.go:96] found id: ""
	I1222 01:12:43.124621 1585816 logs.go:282] 0 containers: []
	W1222 01:12:43.124630 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:43.124636 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:43.124700 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:43.150599 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:43.150624 1585816 cri.go:96] found id: ""
	I1222 01:12:43.150639 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:43.150696 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:43.154578 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:43.154687 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:43.180787 1585816 cri.go:96] found id: ""
	I1222 01:12:43.180811 1585816 logs.go:282] 0 containers: []
	W1222 01:12:43.180819 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:43.180826 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:43.180886 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:43.211007 1585816 cri.go:96] found id: ""
	I1222 01:12:43.211034 1585816 logs.go:282] 0 containers: []
	W1222 01:12:43.211043 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:43.211059 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:43.211070 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:43.273237 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:43.273280 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:43.291708 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:43.291738 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:43.332293 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:43.332329 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:43.365030 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:43.365063 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:43.434180 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:43.434251 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:43.434273 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:43.476114 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:43.476149 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:43.520490 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:43.520521 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:43.561867 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:43.561897 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:46.093546 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:46.104676 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:46.104749 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:46.137693 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:46.137714 1585816 cri.go:96] found id: ""
	I1222 01:12:46.137724 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:46.137793 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:46.141933 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:46.142014 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:46.169455 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:46.169480 1585816 cri.go:96] found id: ""
	I1222 01:12:46.169489 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:46.169556 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:46.173687 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:46.173767 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:46.200440 1585816 cri.go:96] found id: ""
	I1222 01:12:46.200464 1585816 logs.go:282] 0 containers: []
	W1222 01:12:46.200474 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:46.200480 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:46.200542 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:46.226780 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:46.226807 1585816 cri.go:96] found id: ""
	I1222 01:12:46.226816 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:46.226879 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:46.230634 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:46.230726 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:46.267642 1585816 cri.go:96] found id: ""
	I1222 01:12:46.267667 1585816 logs.go:282] 0 containers: []
	W1222 01:12:46.267676 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:46.267683 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:46.267742 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:46.304639 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:46.304661 1585816 cri.go:96] found id: ""
	I1222 01:12:46.304669 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:46.304728 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:46.308918 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:46.309002 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:46.334583 1585816 cri.go:96] found id: ""
	I1222 01:12:46.334608 1585816 logs.go:282] 0 containers: []
	W1222 01:12:46.334616 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:46.334623 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:46.334687 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:46.359512 1585816 cri.go:96] found id: ""
	I1222 01:12:46.359537 1585816 logs.go:282] 0 containers: []
	W1222 01:12:46.359546 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:46.359561 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:46.359573 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:46.406786 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:46.406818 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:46.464820 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:46.464854 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:46.480080 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:46.480108 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:46.546152 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:46.546171 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:46.546183 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:46.583741 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:46.583775 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:46.615971 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:46.616005 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:46.650053 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:46.650092 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:46.680153 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:46.680192 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:49.209563 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:49.219625 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:49.219697 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:49.256172 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:49.256197 1585816 cri.go:96] found id: ""
	I1222 01:12:49.256205 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:49.256263 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:49.260548 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:49.260621 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:49.293694 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:49.293718 1585816 cri.go:96] found id: ""
	I1222 01:12:49.293728 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:49.293790 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:49.298132 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:49.298202 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:49.335964 1585816 cri.go:96] found id: ""
	I1222 01:12:49.335995 1585816 logs.go:282] 0 containers: []
	W1222 01:12:49.336006 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:49.336013 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:49.336078 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:49.361796 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:49.361817 1585816 cri.go:96] found id: ""
	I1222 01:12:49.361825 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:49.361891 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:49.365922 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:49.366007 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:49.395049 1585816 cri.go:96] found id: ""
	I1222 01:12:49.395130 1585816 logs.go:282] 0 containers: []
	W1222 01:12:49.395154 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:49.395176 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:49.395255 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:49.421032 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:49.421059 1585816 cri.go:96] found id: ""
	I1222 01:12:49.421067 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:49.421125 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:49.424910 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:49.425026 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:49.460706 1585816 cri.go:96] found id: ""
	I1222 01:12:49.460782 1585816 logs.go:282] 0 containers: []
	W1222 01:12:49.460806 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:49.460830 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:49.460909 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:49.489461 1585816 cri.go:96] found id: ""
	I1222 01:12:49.489542 1585816 logs.go:282] 0 containers: []
	W1222 01:12:49.489568 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:49.489598 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:49.489628 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:49.523452 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:49.523486 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:49.558107 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:49.558138 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:49.607005 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:49.607035 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:49.635042 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:49.635074 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:49.669571 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:49.669599 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:49.728436 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:49.728470 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:49.743881 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:49.743910 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:49.797910 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:49.797945 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:49.868598 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:52.370235 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:52.381746 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:52.381826 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:52.409576 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:52.409602 1585816 cri.go:96] found id: ""
	I1222 01:12:52.409610 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:52.409667 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:52.414033 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:52.414128 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:52.450619 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:52.450642 1585816 cri.go:96] found id: ""
	I1222 01:12:52.450652 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:52.450712 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:52.461211 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:52.461282 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:52.493432 1585816 cri.go:96] found id: ""
	I1222 01:12:52.493461 1585816 logs.go:282] 0 containers: []
	W1222 01:12:52.493469 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:52.493476 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:52.493538 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:52.529070 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:52.529095 1585816 cri.go:96] found id: ""
	I1222 01:12:52.529102 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:52.529163 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:52.533428 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:52.533504 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:52.563716 1585816 cri.go:96] found id: ""
	I1222 01:12:52.563744 1585816 logs.go:282] 0 containers: []
	W1222 01:12:52.563753 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:52.563759 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:52.563819 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:52.597113 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:52.597136 1585816 cri.go:96] found id: ""
	I1222 01:12:52.597144 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:52.597199 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:52.603204 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:52.603283 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:52.635678 1585816 cri.go:96] found id: ""
	I1222 01:12:52.635708 1585816 logs.go:282] 0 containers: []
	W1222 01:12:52.635717 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:52.635723 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:52.635783 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:52.676345 1585816 cri.go:96] found id: ""
	I1222 01:12:52.676383 1585816 logs.go:282] 0 containers: []
	W1222 01:12:52.676393 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:52.676406 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:52.676417 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:52.732708 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:52.732741 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:52.771746 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:52.771780 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:52.807834 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:52.807866 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:52.869478 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:52.869514 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:52.935339 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:52.935410 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:52.935438 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:52.963670 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:52.963704 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:53.022501 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:53.022575 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:53.042599 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:53.042678 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:55.591975 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:55.605914 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:55.605988 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:55.640995 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:55.641022 1585816 cri.go:96] found id: ""
	I1222 01:12:55.641036 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:55.641099 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:55.644945 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:55.645020 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:55.683439 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:55.683460 1585816 cri.go:96] found id: ""
	I1222 01:12:55.683469 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:55.683525 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:55.688766 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:55.688845 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:55.726398 1585816 cri.go:96] found id: ""
	I1222 01:12:55.726421 1585816 logs.go:282] 0 containers: []
	W1222 01:12:55.726430 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:55.726436 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:55.726504 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:55.754529 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:55.754550 1585816 cri.go:96] found id: ""
	I1222 01:12:55.754558 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:55.754617 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:55.759070 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:55.759144 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:55.803103 1585816 cri.go:96] found id: ""
	I1222 01:12:55.803125 1585816 logs.go:282] 0 containers: []
	W1222 01:12:55.803185 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:55.803196 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:55.803281 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:55.844719 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:55.844768 1585816 cri.go:96] found id: ""
	I1222 01:12:55.844777 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:55.844861 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:55.849188 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:55.849260 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:55.876481 1585816 cri.go:96] found id: ""
	I1222 01:12:55.876507 1585816 logs.go:282] 0 containers: []
	W1222 01:12:55.876516 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:55.876544 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:55.876631 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:55.909134 1585816 cri.go:96] found id: ""
	I1222 01:12:55.909161 1585816 logs.go:282] 0 containers: []
	W1222 01:12:55.909168 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:55.909214 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:55.909235 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:55.972153 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:55.972185 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:56.017522 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:56.017556 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:12:56.057718 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:56.057750 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:56.177454 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:56.177493 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:56.194141 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:56.194177 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:56.238954 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:56.238985 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:56.297302 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:56.297387 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:56.345534 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:56.345561 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:56.435881 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:58.936681 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:12:58.947137 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:12:58.947210 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:12:58.978733 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:58.978753 1585816 cri.go:96] found id: ""
	I1222 01:12:58.978761 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:12:58.978821 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:58.982741 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:12:58.982813 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:12:59.011460 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:59.011487 1585816 cri.go:96] found id: ""
	I1222 01:12:59.011494 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:12:59.011554 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:59.015601 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:12:59.015671 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:12:59.041783 1585816 cri.go:96] found id: ""
	I1222 01:12:59.041806 1585816 logs.go:282] 0 containers: []
	W1222 01:12:59.041814 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:12:59.041821 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:12:59.041878 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:12:59.068843 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:59.068868 1585816 cri.go:96] found id: ""
	I1222 01:12:59.068877 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:12:59.068934 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:59.072714 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:12:59.072817 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:12:59.098833 1585816 cri.go:96] found id: ""
	I1222 01:12:59.098858 1585816 logs.go:282] 0 containers: []
	W1222 01:12:59.098867 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:12:59.098873 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:12:59.098933 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:12:59.124819 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:59.124840 1585816 cri.go:96] found id: ""
	I1222 01:12:59.124848 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:12:59.124908 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:12:59.128476 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:12:59.128552 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:12:59.153699 1585816 cri.go:96] found id: ""
	I1222 01:12:59.153721 1585816 logs.go:282] 0 containers: []
	W1222 01:12:59.153729 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:12:59.153736 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:12:59.153801 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:12:59.179337 1585816 cri.go:96] found id: ""
	I1222 01:12:59.179363 1585816 logs.go:282] 0 containers: []
	W1222 01:12:59.179372 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:12:59.179385 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:12:59.179397 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:12:59.215586 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:12:59.215661 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:12:59.265071 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:12:59.265146 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:12:59.364929 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:12:59.365005 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:12:59.401553 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:12:59.401627 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:12:59.443187 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:12:59.443208 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:12:59.516461 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:12:59.516502 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:12:59.532286 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:12:59.532316 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:12:59.635317 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:12:59.635345 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:12:59.635358 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:02.170219 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:02.181383 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:02.181457 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:02.208145 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:02.208168 1585816 cri.go:96] found id: ""
	I1222 01:13:02.208176 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:02.208234 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:02.212225 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:02.212305 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:02.248545 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:02.248570 1585816 cri.go:96] found id: ""
	I1222 01:13:02.248578 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:02.248637 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:02.253996 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:02.254111 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:02.288425 1585816 cri.go:96] found id: ""
	I1222 01:13:02.288456 1585816 logs.go:282] 0 containers: []
	W1222 01:13:02.288465 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:02.288472 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:02.288538 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:02.318810 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:02.318835 1585816 cri.go:96] found id: ""
	I1222 01:13:02.318844 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:02.318907 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:02.322888 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:02.322961 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:02.347713 1585816 cri.go:96] found id: ""
	I1222 01:13:02.347738 1585816 logs.go:282] 0 containers: []
	W1222 01:13:02.347747 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:02.347753 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:02.347815 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:02.376155 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:02.376180 1585816 cri.go:96] found id: ""
	I1222 01:13:02.376189 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:02.376244 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:02.380087 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:02.380193 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:02.409584 1585816 cri.go:96] found id: ""
	I1222 01:13:02.409611 1585816 logs.go:282] 0 containers: []
	W1222 01:13:02.409620 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:02.409627 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:02.409694 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:02.445208 1585816 cri.go:96] found id: ""
	I1222 01:13:02.445235 1585816 logs.go:282] 0 containers: []
	W1222 01:13:02.445244 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:02.445276 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:02.445294 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:02.467401 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:02.467432 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:02.505713 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:02.505747 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:02.536081 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:02.536116 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:02.569084 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:02.569115 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:02.633585 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:02.633626 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:02.701862 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:02.701934 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:02.701963 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:02.738609 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:02.738643 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:02.771849 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:02.771884 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:05.318300 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:05.329059 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:05.329130 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:05.355965 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:05.355985 1585816 cri.go:96] found id: ""
	I1222 01:13:05.355993 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:05.356053 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:05.360142 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:05.360217 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:05.393897 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:05.393922 1585816 cri.go:96] found id: ""
	I1222 01:13:05.393931 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:05.393992 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:05.397931 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:05.398008 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:05.423142 1585816 cri.go:96] found id: ""
	I1222 01:13:05.423165 1585816 logs.go:282] 0 containers: []
	W1222 01:13:05.423172 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:05.423178 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:05.423237 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:05.456617 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:05.456637 1585816 cri.go:96] found id: ""
	I1222 01:13:05.456646 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:05.456703 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:05.460430 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:05.460508 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:05.489805 1585816 cri.go:96] found id: ""
	I1222 01:13:05.489831 1585816 logs.go:282] 0 containers: []
	W1222 01:13:05.489840 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:05.489846 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:05.489906 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:05.517220 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:05.517244 1585816 cri.go:96] found id: ""
	I1222 01:13:05.517252 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:05.517315 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:05.521282 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:05.521363 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:05.546925 1585816 cri.go:96] found id: ""
	I1222 01:13:05.546952 1585816 logs.go:282] 0 containers: []
	W1222 01:13:05.546961 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:05.546971 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:05.547033 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:05.575217 1585816 cri.go:96] found id: ""
	I1222 01:13:05.575246 1585816 logs.go:282] 0 containers: []
	W1222 01:13:05.575257 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:05.575274 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:05.575290 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:05.592255 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:05.592338 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:05.623646 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:05.623689 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:05.692110 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:05.692184 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:05.692212 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:05.731854 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:05.731885 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:05.766538 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:05.766569 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:05.801952 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:05.801984 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:05.852072 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:05.852107 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:05.885689 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:05.885721 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:08.448314 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:08.466449 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:08.466519 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:08.515188 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:08.515211 1585816 cri.go:96] found id: ""
	I1222 01:13:08.515220 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:08.515283 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:08.519372 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:08.519454 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:08.558497 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:08.558582 1585816 cri.go:96] found id: ""
	I1222 01:13:08.558604 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:08.558700 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:08.564770 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:08.564842 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:08.624534 1585816 cri.go:96] found id: ""
	I1222 01:13:08.624557 1585816 logs.go:282] 0 containers: []
	W1222 01:13:08.624566 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:08.624573 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:08.624633 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:08.652866 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:08.652886 1585816 cri.go:96] found id: ""
	I1222 01:13:08.652894 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:08.652953 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:08.656889 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:08.656959 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:08.692658 1585816 cri.go:96] found id: ""
	I1222 01:13:08.692682 1585816 logs.go:282] 0 containers: []
	W1222 01:13:08.692690 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:08.692697 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:08.692759 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:08.728552 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:08.728631 1585816 cri.go:96] found id: ""
	I1222 01:13:08.728642 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:08.728737 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:08.732585 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:08.732656 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:08.760755 1585816 cri.go:96] found id: ""
	I1222 01:13:08.760778 1585816 logs.go:282] 0 containers: []
	W1222 01:13:08.760786 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:08.760792 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:08.760853 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:08.788476 1585816 cri.go:96] found id: ""
	I1222 01:13:08.788499 1585816 logs.go:282] 0 containers: []
	W1222 01:13:08.788507 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:08.788519 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:08.788531 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:08.829437 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:08.829515 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:08.875292 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:08.875367 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:08.962456 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:08.962531 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:08.962557 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:09.047406 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:09.047442 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:09.097146 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:09.097190 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:09.151691 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:09.151721 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:09.218662 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:09.218700 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:09.234813 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:09.234845 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:11.786220 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:11.796660 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:11.796732 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:11.824857 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:11.824877 1585816 cri.go:96] found id: ""
	I1222 01:13:11.824886 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:11.824943 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:11.829289 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:11.829365 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:11.865168 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:11.865188 1585816 cri.go:96] found id: ""
	I1222 01:13:11.865196 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:11.865258 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:11.869679 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:11.869753 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:11.900696 1585816 cri.go:96] found id: ""
	I1222 01:13:11.900718 1585816 logs.go:282] 0 containers: []
	W1222 01:13:11.900726 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:11.900737 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:11.900798 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:11.938216 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:11.938236 1585816 cri.go:96] found id: ""
	I1222 01:13:11.938244 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:11.938302 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:11.943053 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:11.943186 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:11.994321 1585816 cri.go:96] found id: ""
	I1222 01:13:11.994344 1585816 logs.go:282] 0 containers: []
	W1222 01:13:11.994352 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:11.994359 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:11.994423 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:12.095486 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:12.095562 1585816 cri.go:96] found id: ""
	I1222 01:13:12.095586 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:12.095671 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:12.100275 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:12.100431 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:12.136404 1585816 cri.go:96] found id: ""
	I1222 01:13:12.136487 1585816 logs.go:282] 0 containers: []
	W1222 01:13:12.136520 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:12.136541 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:12.136647 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:12.166766 1585816 cri.go:96] found id: ""
	I1222 01:13:12.166843 1585816 logs.go:282] 0 containers: []
	W1222 01:13:12.166867 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:12.166912 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:12.166943 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:12.228889 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:12.228971 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:12.267577 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:12.267659 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:12.334040 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:12.334205 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:12.379077 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:12.379151 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:12.443504 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:12.443578 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:12.474876 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:12.474918 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:12.508605 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:12.508684 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:12.525715 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:12.525744 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:12.589268 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:15.090265 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:15.101749 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:15.101832 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:15.132659 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:15.132683 1585816 cri.go:96] found id: ""
	I1222 01:13:15.132691 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:15.132752 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:15.137023 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:15.137135 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:15.163568 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:15.163599 1585816 cri.go:96] found id: ""
	I1222 01:13:15.163608 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:15.163669 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:15.168026 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:15.168112 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:15.195999 1585816 cri.go:96] found id: ""
	I1222 01:13:15.196022 1585816 logs.go:282] 0 containers: []
	W1222 01:13:15.196036 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:15.196043 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:15.196105 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:15.222651 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:15.222677 1585816 cri.go:96] found id: ""
	I1222 01:13:15.222686 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:15.222765 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:15.226492 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:15.226572 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:15.252688 1585816 cri.go:96] found id: ""
	I1222 01:13:15.252719 1585816 logs.go:282] 0 containers: []
	W1222 01:13:15.252729 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:15.252736 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:15.252805 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:15.278978 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:15.279001 1585816 cri.go:96] found id: ""
	I1222 01:13:15.279010 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:15.279066 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:15.282847 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:15.282923 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:15.308289 1585816 cri.go:96] found id: ""
	I1222 01:13:15.308315 1585816 logs.go:282] 0 containers: []
	W1222 01:13:15.308324 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:15.308330 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:15.308448 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:15.333136 1585816 cri.go:96] found id: ""
	I1222 01:13:15.333163 1585816 logs.go:282] 0 containers: []
	W1222 01:13:15.333171 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:15.333184 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:15.333195 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:15.392990 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:15.393027 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:15.427551 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:15.427587 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:15.468977 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:15.469007 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:15.508755 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:15.508787 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:15.538300 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:15.538335 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:15.579562 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:15.579590 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:15.595189 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:15.595219 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:15.661456 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:15.661477 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:15.661491 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:18.196038 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:18.207059 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:18.207131 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:18.233880 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:18.233912 1585816 cri.go:96] found id: ""
	I1222 01:13:18.233920 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:18.233980 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:18.237893 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:18.237970 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:18.265134 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:18.265156 1585816 cri.go:96] found id: ""
	I1222 01:13:18.265164 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:18.265222 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:18.269320 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:18.269413 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:18.295233 1585816 cri.go:96] found id: ""
	I1222 01:13:18.295262 1585816 logs.go:282] 0 containers: []
	W1222 01:13:18.295271 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:18.295278 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:18.295339 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:18.321198 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:18.321220 1585816 cri.go:96] found id: ""
	I1222 01:13:18.321228 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:18.321286 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:18.325150 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:18.325224 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:18.349681 1585816 cri.go:96] found id: ""
	I1222 01:13:18.349704 1585816 logs.go:282] 0 containers: []
	W1222 01:13:18.349711 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:18.349718 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:18.349776 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:18.380486 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:18.380509 1585816 cri.go:96] found id: ""
	I1222 01:13:18.380518 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:18.380575 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:18.384216 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:18.384346 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:18.412256 1585816 cri.go:96] found id: ""
	I1222 01:13:18.412282 1585816 logs.go:282] 0 containers: []
	W1222 01:13:18.412290 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:18.412297 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:18.412411 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:18.447440 1585816 cri.go:96] found id: ""
	I1222 01:13:18.447465 1585816 logs.go:282] 0 containers: []
	W1222 01:13:18.447476 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:18.447489 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:18.447520 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:18.499486 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:18.499525 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:18.532949 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:18.532978 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:18.576031 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:18.576060 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:18.635257 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:18.635294 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:18.650505 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:18.650534 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:18.721714 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:18.721779 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:18.721807 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:18.772709 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:18.772786 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:18.817587 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:18.817676 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:21.350521 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:21.361219 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:21.361294 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:21.393696 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:21.393718 1585816 cri.go:96] found id: ""
	I1222 01:13:21.393725 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:21.393831 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:21.397797 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:21.397927 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:21.423368 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:21.423392 1585816 cri.go:96] found id: ""
	I1222 01:13:21.423401 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:21.423466 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:21.427326 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:21.427403 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:21.453958 1585816 cri.go:96] found id: ""
	I1222 01:13:21.453982 1585816 logs.go:282] 0 containers: []
	W1222 01:13:21.453991 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:21.453998 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:21.454061 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:21.480989 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:21.481012 1585816 cri.go:96] found id: ""
	I1222 01:13:21.481021 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:21.481081 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:21.485050 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:21.485135 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:21.511494 1585816 cri.go:96] found id: ""
	I1222 01:13:21.511521 1585816 logs.go:282] 0 containers: []
	W1222 01:13:21.511530 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:21.511538 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:21.511611 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:21.540974 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:21.540999 1585816 cri.go:96] found id: ""
	I1222 01:13:21.541007 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:21.541069 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:21.545135 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:21.545220 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:21.571439 1585816 cri.go:96] found id: ""
	I1222 01:13:21.571512 1585816 logs.go:282] 0 containers: []
	W1222 01:13:21.571535 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:21.571558 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:21.571669 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:21.597207 1585816 cri.go:96] found id: ""
	I1222 01:13:21.597235 1585816 logs.go:282] 0 containers: []
	W1222 01:13:21.597245 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:21.597259 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:21.597271 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:21.612556 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:21.612640 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:21.691691 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:21.691709 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:21.691722 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:21.750627 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:21.754195 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:21.808610 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:21.808701 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:21.872392 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:21.872478 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:21.930993 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:21.931070 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:21.966013 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:21.966201 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:22.007077 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:22.007111 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:24.578525 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:24.588745 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:24.588825 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:24.616154 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:24.616179 1585816 cri.go:96] found id: ""
	I1222 01:13:24.616188 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:24.616251 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:24.620283 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:24.620360 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:24.645231 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:24.645253 1585816 cri.go:96] found id: ""
	I1222 01:13:24.645262 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:24.645317 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:24.649199 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:24.649274 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:24.674625 1585816 cri.go:96] found id: ""
	I1222 01:13:24.674652 1585816 logs.go:282] 0 containers: []
	W1222 01:13:24.674661 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:24.674667 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:24.674729 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:24.704872 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:24.704896 1585816 cri.go:96] found id: ""
	I1222 01:13:24.704904 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:24.704962 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:24.708854 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:24.708928 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:24.737954 1585816 cri.go:96] found id: ""
	I1222 01:13:24.737982 1585816 logs.go:282] 0 containers: []
	W1222 01:13:24.737992 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:24.738012 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:24.738105 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:24.770490 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:24.770513 1585816 cri.go:96] found id: ""
	I1222 01:13:24.770521 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:24.770589 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:24.774808 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:24.774879 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:24.805277 1585816 cri.go:96] found id: ""
	I1222 01:13:24.805298 1585816 logs.go:282] 0 containers: []
	W1222 01:13:24.805306 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:24.805313 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:24.805373 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:24.840635 1585816 cri.go:96] found id: ""
	I1222 01:13:24.840657 1585816 logs.go:282] 0 containers: []
	W1222 01:13:24.840665 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:24.840678 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:24.840690 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:24.898016 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:24.898049 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:24.913095 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:24.913173 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:24.983335 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:24.983355 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:24.983369 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:25.036233 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:25.036270 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:25.066791 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:25.066831 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:25.102216 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:25.102262 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:25.142385 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:25.142423 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:25.187053 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:25.187173 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:27.716812 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:27.727245 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:13:27.727316 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:13:27.769558 1585816 cri.go:96] found id: "57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:27.769578 1585816 cri.go:96] found id: ""
	I1222 01:13:27.769586 1585816 logs.go:282] 1 containers: [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c]
	I1222 01:13:27.769647 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:27.774050 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:13:27.774187 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:13:27.803882 1585816 cri.go:96] found id: "1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:27.803950 1585816 cri.go:96] found id: ""
	I1222 01:13:27.803974 1585816 logs.go:282] 1 containers: [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d]
	I1222 01:13:27.804049 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:27.808255 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:13:27.808393 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:13:27.836324 1585816 cri.go:96] found id: ""
	I1222 01:13:27.836352 1585816 logs.go:282] 0 containers: []
	W1222 01:13:27.836405 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:13:27.836420 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:13:27.836512 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:13:27.864920 1585816 cri.go:96] found id: "ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:27.864946 1585816 cri.go:96] found id: ""
	I1222 01:13:27.864955 1585816 logs.go:282] 1 containers: [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e]
	I1222 01:13:27.865040 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:27.868999 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:13:27.869096 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:13:27.894364 1585816 cri.go:96] found id: ""
	I1222 01:13:27.894433 1585816 logs.go:282] 0 containers: []
	W1222 01:13:27.894457 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:13:27.894470 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:13:27.894532 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:13:27.920419 1585816 cri.go:96] found id: "7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:27.920441 1585816 cri.go:96] found id: ""
	I1222 01:13:27.920450 1585816 logs.go:282] 1 containers: [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6]
	I1222 01:13:27.920524 1585816 ssh_runner.go:195] Run: which crictl
	I1222 01:13:27.924446 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:13:27.924561 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:13:27.951257 1585816 cri.go:96] found id: ""
	I1222 01:13:27.951286 1585816 logs.go:282] 0 containers: []
	W1222 01:13:27.951296 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:13:27.951303 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:13:27.951363 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:13:27.979132 1585816 cri.go:96] found id: ""
	I1222 01:13:27.979159 1585816 logs.go:282] 0 containers: []
	W1222 01:13:27.979168 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:13:27.979182 1585816 logs.go:123] Gathering logs for kube-apiserver [57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c] ...
	I1222 01:13:27.979193 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c"
	I1222 01:13:28.030388 1585816 logs.go:123] Gathering logs for kube-scheduler [ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e] ...
	I1222 01:13:28.030440 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e"
	I1222 01:13:28.067431 1585816 logs.go:123] Gathering logs for kube-controller-manager [7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6] ...
	I1222 01:13:28.067468 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6"
	I1222 01:13:28.119329 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:13:28.119373 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:13:28.148739 1585816 logs.go:123] Gathering logs for etcd [1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d] ...
	I1222 01:13:28.148773 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d"
	I1222 01:13:28.185860 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:13:28.185890 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:13:28.215766 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:13:28.215796 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:13:28.277172 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:13:28.277209 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:13:28.292576 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:13:28.292608 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:13:28.359973 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:13:30.860770 1585816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:13:30.872895 1585816 kubeadm.go:602] duration metric: took 4m4.04184019s to restartPrimaryControlPlane
	W1222 01:13:30.872962 1585816 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 01:13:30.873029 1585816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:13:31.375747 1585816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:13:31.391874 1585816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:13:31.401220 1585816 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:13:31.401288 1585816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:13:31.410734 1585816 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:13:31.410796 1585816 kubeadm.go:158] found existing configuration files:
	
	I1222 01:13:31.410865 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:13:31.419343 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:13:31.419407 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:13:31.427173 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:13:31.434972 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:13:31.435040 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:13:31.442865 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:13:31.450665 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:13:31.450729 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:13:31.458505 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:13:31.466799 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:13:31.466865 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:13:31.474507 1585816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:13:31.543509 1585816 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:13:31.543613 1585816 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:13:31.620137 1585816 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:13:31.620215 1585816 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:13:31.620252 1585816 kubeadm.go:319] OS: Linux
	I1222 01:13:31.620300 1585816 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:13:31.620355 1585816 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:13:31.620424 1585816 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:13:31.620474 1585816 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:13:31.620522 1585816 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:13:31.620571 1585816 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:13:31.620617 1585816 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:13:31.620666 1585816 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:13:31.620713 1585816 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:13:31.693174 1585816 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:13:31.693288 1585816 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:13:31.693388 1585816 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:13:41.767728 1585816 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:13:41.770652 1585816 out.go:252]   - Generating certificates and keys ...
	I1222 01:13:41.770758 1585816 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:13:41.770843 1585816 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:13:41.770933 1585816 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:13:41.770994 1585816 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:13:41.771074 1585816 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:13:41.771128 1585816 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:13:41.771191 1585816 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:13:41.771253 1585816 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:13:41.771358 1585816 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:13:41.771918 1585816 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:13:41.772267 1585816 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:13:41.772329 1585816 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:13:41.850849 1585816 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:13:41.992911 1585816 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:13:42.100135 1585816 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:13:42.234823 1585816 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:13:42.621074 1585816 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:13:42.621874 1585816 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:13:42.626438 1585816 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:13:42.629933 1585816 out.go:252]   - Booting up control plane ...
	I1222 01:13:42.630051 1585816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:13:42.630158 1585816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:13:42.630228 1585816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:13:42.651438 1585816 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:13:42.651550 1585816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:13:42.659806 1585816 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:13:42.660303 1585816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:13:42.660354 1585816 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:13:42.795320 1585816 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:13:42.795444 1585816 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:17:42.795065 1585816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000251988s
	I1222 01:17:42.795104 1585816 kubeadm.go:319] 
	I1222 01:17:42.795175 1585816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:17:42.795212 1585816 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:17:42.795313 1585816 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:17:42.795324 1585816 kubeadm.go:319] 
	I1222 01:17:42.795423 1585816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:17:42.795457 1585816 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:17:42.795490 1585816 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:17:42.795501 1585816 kubeadm.go:319] 
	I1222 01:17:42.799793 1585816 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:17:42.800254 1585816 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:17:42.800397 1585816 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:17:42.800685 1585816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:17:42.800710 1585816 kubeadm.go:319] 
	I1222 01:17:42.800806 1585816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:17:42.800914 1585816 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000251988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:17:42.801006 1585816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:17:43.216125 1585816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:17:43.231256 1585816 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:17:43.231329 1585816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:17:43.242271 1585816 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:17:43.242291 1585816 kubeadm.go:158] found existing configuration files:
	
	I1222 01:17:43.242345 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:17:43.251812 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:17:43.251884 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:17:43.262711 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:17:43.271407 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:17:43.271491 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:17:43.280309 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:17:43.288957 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:17:43.289030 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:17:43.297023 1585816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:17:43.305640 1585816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:17:43.305712 1585816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:17:43.313978 1585816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:17:43.354549 1585816 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:17:43.354872 1585816 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:17:43.436287 1585816 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:17:43.436370 1585816 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:17:43.436415 1585816 kubeadm.go:319] OS: Linux
	I1222 01:17:43.436467 1585816 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:17:43.436519 1585816 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:17:43.436570 1585816 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:17:43.436622 1585816 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:17:43.436675 1585816 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:17:43.436729 1585816 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:17:43.436778 1585816 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:17:43.436830 1585816 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:17:43.436881 1585816 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:17:43.508846 1585816 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:17:43.509039 1585816 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:17:43.509167 1585816 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:17:43.517341 1585816 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:17:43.523020 1585816 out.go:252]   - Generating certificates and keys ...
	I1222 01:17:43.523142 1585816 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:17:43.523226 1585816 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:17:43.523326 1585816 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:17:43.523403 1585816 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:17:43.523490 1585816 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:17:43.523558 1585816 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:17:43.523641 1585816 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:17:43.523717 1585816 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:17:43.523815 1585816 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:17:43.523912 1585816 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:17:43.523967 1585816 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:17:43.524047 1585816 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:17:43.871578 1585816 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:17:44.358137 1585816 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:17:44.534829 1585816 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:17:44.887593 1585816 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:17:44.972755 1585816 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:17:44.973490 1585816 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:17:44.976098 1585816 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:17:44.979413 1585816 out.go:252]   - Booting up control plane ...
	I1222 01:17:44.979542 1585816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:17:44.979627 1585816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:17:44.979695 1585816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:17:45.032054 1585816 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:17:45.032169 1585816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:17:45.056526 1585816 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:17:45.056631 1585816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:17:45.056671 1585816 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:17:45.321645 1585816 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:17:45.321769 1585816 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:21:45.322517 1585816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00128206s
	I1222 01:21:45.322562 1585816 kubeadm.go:319] 
	I1222 01:21:45.322621 1585816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:21:45.322672 1585816 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:21:45.322783 1585816 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:21:45.322793 1585816 kubeadm.go:319] 
	I1222 01:21:45.322899 1585816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:21:45.322936 1585816 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:21:45.322970 1585816 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:21:45.322979 1585816 kubeadm.go:319] 
	I1222 01:21:45.328729 1585816 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:21:45.329250 1585816 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:21:45.329373 1585816 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:21:45.329605 1585816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:21:45.329619 1585816 kubeadm.go:319] 
	I1222 01:21:45.329705 1585816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:21:45.329834 1585816 kubeadm.go:403] duration metric: took 12m18.552236915s to StartCluster
	I1222 01:21:45.329894 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:21:45.330304 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:21:45.365868 1585816 cri.go:96] found id: ""
	I1222 01:21:45.365899 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.365909 1585816 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:21:45.365916 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:21:45.365990 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:21:45.396166 1585816 cri.go:96] found id: ""
	I1222 01:21:45.396194 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.396205 1585816 logs.go:284] No container was found matching "etcd"
	I1222 01:21:45.396212 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:21:45.396276 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:21:45.421967 1585816 cri.go:96] found id: ""
	I1222 01:21:45.421994 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.422003 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:21:45.422009 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:21:45.422129 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:21:45.451493 1585816 cri.go:96] found id: ""
	I1222 01:21:45.451522 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.451532 1585816 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:21:45.451539 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:21:45.451605 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:21:45.476504 1585816 cri.go:96] found id: ""
	I1222 01:21:45.476579 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.476601 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:21:45.476628 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:21:45.476742 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:21:45.515272 1585816 cri.go:96] found id: ""
	I1222 01:21:45.515347 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.515372 1585816 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:21:45.515397 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:21:45.515518 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:21:45.548539 1585816 cri.go:96] found id: ""
	I1222 01:21:45.548641 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.548677 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:21:45.548711 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:21:45.548813 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:21:45.580158 1585816 cri.go:96] found id: ""
	I1222 01:21:45.580234 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.580249 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:21:45.580260 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:21:45.580272 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:21:45.639018 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:21:45.639052 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:21:45.654323 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:21:45.654353 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:21:45.733002 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:21:45.733025 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:21:45.733038 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:21:45.785161 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:21:45.785197 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:21:45.815129 1585816 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128206s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:21:45.815186 1585816 out.go:285] * 
	W1222 01:21:45.815280 1585816 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128206s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:21:45.815297 1585816 out.go:285] * 
	W1222 01:21:45.817452 1585816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:21:45.826168 1585816 out.go:203] 
	W1222 01:21:45.831295 1585816 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128206s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:21:45.831347 1585816 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:21:45.831376 1585816 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:21:45.835393 1585816 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-108800 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-108800 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-108800 version --output=json: exit status 1 (125.447161ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-22 01:21:46.528007075 +0000 UTC m=+4662.609534336
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-108800
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-108800:

-- stdout --
	[
	    {
	        "Id": "1796d9c6321db085acd22837947aa945ec00bb0671a1f4841de02ce39070ed70",
	        "Created": "2025-12-22T01:08:40.328033331Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1585971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:09:13.282018271Z",
	            "FinishedAt": "2025-12-22T01:09:12.274022683Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/1796d9c6321db085acd22837947aa945ec00bb0671a1f4841de02ce39070ed70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1796d9c6321db085acd22837947aa945ec00bb0671a1f4841de02ce39070ed70/hostname",
	        "HostsPath": "/var/lib/docker/containers/1796d9c6321db085acd22837947aa945ec00bb0671a1f4841de02ce39070ed70/hosts",
	        "LogPath": "/var/lib/docker/containers/1796d9c6321db085acd22837947aa945ec00bb0671a1f4841de02ce39070ed70/1796d9c6321db085acd22837947aa945ec00bb0671a1f4841de02ce39070ed70-json.log",
	        "Name": "/kubernetes-upgrade-108800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-108800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-108800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1796d9c6321db085acd22837947aa945ec00bb0671a1f4841de02ce39070ed70",
	                "LowerDir": "/var/lib/docker/overlay2/a6142b8f2743c0e8f4a081b14f8d1df8695e15eef197aa9c57f76739c3f06c42-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a6142b8f2743c0e8f4a081b14f8d1df8695e15eef197aa9c57f76739c3f06c42/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a6142b8f2743c0e8f4a081b14f8d1df8695e15eef197aa9c57f76739c3f06c42/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a6142b8f2743c0e8f4a081b14f8d1df8695e15eef197aa9c57f76739c3f06c42/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-108800",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-108800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-108800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-108800",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-108800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "03c055c666bb95c44a71a64e9219ac1eaadb60ae627bab8409b3f9fa4e6d7143",
	            "SandboxKey": "/var/run/docker/netns/03c055c666bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38595"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38596"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38597"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38598"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-108800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:89:67:21:c8:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5272a4b7532c4153f381f641797072fe8f344335126f67d0a939bfb0b8836481",
	                    "EndpointID": "9afaf74943f6431c1e7418bab3b0f314a9f1d76e213f8db7a0677caa1311e8af",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-108800",
	                        "1796d9c6321d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-108800 -n kubernetes-upgrade-108800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-108800 -n kubernetes-upgrade-108800: exit status 2 (326.528123ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-108800 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-892179 sudo systemctl status kubelet --all --full --no-pager                                       │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl cat kubelet --no-pager                                                       │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo journalctl -xeu kubelet --all --full --no-pager                                        │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cat /etc/kubernetes/kubelet.conf                                                       │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cat /var/lib/kubelet/config.yaml                                                       │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl status docker --all --full --no-pager                                        │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl cat docker --no-pager                                                        │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cat /etc/docker/daemon.json                                                            │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo docker system info                                                                     │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl status cri-docker --all --full --no-pager                                    │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl cat cri-docker --no-pager                                                    │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                               │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cat /usr/lib/systemd/system/cri-docker.service                                         │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cri-dockerd --version                                                                  │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl status containerd --all --full --no-pager                                    │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl cat containerd --no-pager                                                    │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cat /lib/systemd/system/containerd.service                                             │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo cat /etc/containerd/config.toml                                                        │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo containerd config dump                                                                 │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl status crio --all --full --no-pager                                          │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo systemctl cat crio --no-pager                                                          │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ ssh     │ -p cilium-892179 sudo crio config                                                                            │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │                     │
	│ delete  │ -p cilium-892179                                                                                             │ cilium-892179          │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │ 22 Dec 25 01:19 UTC │
	│ start   │ -p cert-expiration-007057 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd │ cert-expiration-007057 │ jenkins │ v1.37.0 │ 22 Dec 25 01:19 UTC │ 22 Dec 25 01:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:19:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:19:47.325673 1633728 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:19:47.325771 1633728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:19:47.325779 1633728 out.go:374] Setting ErrFile to fd 2...
	I1222 01:19:47.325783 1633728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:19:47.326050 1633728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:19:47.326464 1633728 out.go:368] Setting JSON to false
	I1222 01:19:47.327294 1633728 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115340,"bootTime":1766251047,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:19:47.327346 1633728 start.go:143] virtualization:  
	I1222 01:19:47.330911 1633728 out.go:179] * [cert-expiration-007057] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:19:47.333969 1633728 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:19:47.334041 1633728 notify.go:221] Checking for updates...
	I1222 01:19:47.339908 1633728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:19:47.342883 1633728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:19:47.345800 1633728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:19:47.348732 1633728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:19:47.351594 1633728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:19:47.355191 1633728 config.go:182] Loaded profile config "kubernetes-upgrade-108800": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:19:47.355285 1633728 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:19:47.386134 1633728 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:19:47.386265 1633728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:19:47.445961 1633728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:19:47.436685321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:19:47.446052 1633728 docker.go:319] overlay module found
	I1222 01:19:47.449492 1633728 out.go:179] * Using the docker driver based on user configuration
	I1222 01:19:47.452235 1633728 start.go:309] selected driver: docker
	I1222 01:19:47.452243 1633728 start.go:928] validating driver "docker" against <nil>
	I1222 01:19:47.452254 1633728 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:19:47.453001 1633728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:19:47.521584 1633728 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:19:47.511892123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:19:47.521724 1633728 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:19:47.521946 1633728 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 01:19:47.524783 1633728 out.go:179] * Using Docker driver with root privileges
	I1222 01:19:47.527417 1633728 cni.go:84] Creating CNI manager for ""
	I1222 01:19:47.527475 1633728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:19:47.527484 1633728 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:19:47.527551 1633728 start.go:353] cluster config:
	{Name:cert-expiration-007057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-007057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:19:47.530562 1633728 out.go:179] * Starting "cert-expiration-007057" primary control-plane node in "cert-expiration-007057" cluster
	I1222 01:19:47.533335 1633728 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:19:47.536140 1633728 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:19:47.538954 1633728 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:19:47.538993 1633728 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1222 01:19:47.539001 1633728 cache.go:65] Caching tarball of preloaded images
	I1222 01:19:47.539100 1633728 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:19:47.539109 1633728 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1222 01:19:47.539219 1633728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/config.json ...
	I1222 01:19:47.539238 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/config.json: {Name:mkff747b81897f9c203bc2c228e2ec5a02101b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:47.539417 1633728 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:19:47.563959 1633728 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:19:47.563971 1633728 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:19:47.563988 1633728 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:19:47.564020 1633728 start.go:360] acquireMachinesLock for cert-expiration-007057: {Name:mke670a35ea80d237a263918e7ba4185a40ebbc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:19:47.564139 1633728 start.go:364] duration metric: took 104.288µs to acquireMachinesLock for "cert-expiration-007057"
	I1222 01:19:47.564164 1633728 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-007057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-007057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:19:47.564230 1633728 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:19:47.567570 1633728 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:19:47.567826 1633728 start.go:159] libmachine.API.Create for "cert-expiration-007057" (driver="docker")
	I1222 01:19:47.567868 1633728 client.go:173] LocalClient.Create starting
	I1222 01:19:47.567983 1633728 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:19:47.568026 1633728 main.go:144] libmachine: Decoding PEM data...
	I1222 01:19:47.568047 1633728 main.go:144] libmachine: Parsing certificate...
	I1222 01:19:47.568104 1633728 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:19:47.568129 1633728 main.go:144] libmachine: Decoding PEM data...
	I1222 01:19:47.568140 1633728 main.go:144] libmachine: Parsing certificate...
	I1222 01:19:47.568598 1633728 cli_runner.go:164] Run: docker network inspect cert-expiration-007057 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:19:47.585087 1633728 cli_runner.go:211] docker network inspect cert-expiration-007057 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:19:47.585219 1633728 network_create.go:284] running [docker network inspect cert-expiration-007057] to gather additional debugging logs...
	I1222 01:19:47.585236 1633728 cli_runner.go:164] Run: docker network inspect cert-expiration-007057
	W1222 01:19:47.600856 1633728 cli_runner.go:211] docker network inspect cert-expiration-007057 returned with exit code 1
	I1222 01:19:47.600884 1633728 network_create.go:287] error running [docker network inspect cert-expiration-007057]: docker network inspect cert-expiration-007057: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-007057 not found
	I1222 01:19:47.600895 1633728 network_create.go:289] output of [docker network inspect cert-expiration-007057]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-007057 not found
	
	** /stderr **
	I1222 01:19:47.601006 1633728 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:19:47.620039 1633728 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:19:47.620392 1633728 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:19:47.620712 1633728 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:19:47.621068 1633728 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5272a4b7532c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:e0:8a:c3:9a:c4} reservation:<nil>}
	I1222 01:19:47.621571 1633728 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a45580}
	I1222 01:19:47.621586 1633728 network_create.go:124] attempt to create docker network cert-expiration-007057 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:19:47.621647 1633728 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-007057 cert-expiration-007057
	I1222 01:19:47.683246 1633728 network_create.go:108] docker network cert-expiration-007057 192.168.85.0/24 created
	I1222 01:19:47.683269 1633728 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-007057" container
	I1222 01:19:47.683351 1633728 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:19:47.699202 1633728 cli_runner.go:164] Run: docker volume create cert-expiration-007057 --label name.minikube.sigs.k8s.io=cert-expiration-007057 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:19:47.717102 1633728 oci.go:103] Successfully created a docker volume cert-expiration-007057
	I1222 01:19:47.717224 1633728 cli_runner.go:164] Run: docker run --rm --name cert-expiration-007057-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-007057 --entrypoint /usr/bin/test -v cert-expiration-007057:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:19:48.286491 1633728 oci.go:107] Successfully prepared a docker volume cert-expiration-007057
	I1222 01:19:48.286541 1633728 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:19:48.286550 1633728 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:19:48.286630 1633728 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-007057:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:19:52.171947 1633728 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-007057:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.885282724s)
	I1222 01:19:52.171968 1633728 kic.go:203] duration metric: took 3.885415698s to extract preloaded images to volume ...
	W1222 01:19:52.172122 1633728 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:19:52.172223 1633728 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:19:52.228120 1633728 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-007057 --name cert-expiration-007057 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-007057 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-007057 --network cert-expiration-007057 --ip 192.168.85.2 --volume cert-expiration-007057:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:19:52.531943 1633728 cli_runner.go:164] Run: docker container inspect cert-expiration-007057 --format={{.State.Running}}
	I1222 01:19:52.560483 1633728 cli_runner.go:164] Run: docker container inspect cert-expiration-007057 --format={{.State.Status}}
	I1222 01:19:52.589576 1633728 cli_runner.go:164] Run: docker exec cert-expiration-007057 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:19:52.638949 1633728 oci.go:144] the created container "cert-expiration-007057" has a running status.
	I1222 01:19:52.638967 1633728 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa...
	I1222 01:19:52.674896 1633728 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:19:52.700229 1633728 cli_runner.go:164] Run: docker container inspect cert-expiration-007057 --format={{.State.Status}}
	I1222 01:19:52.728175 1633728 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:19:52.728186 1633728 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-007057 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:19:52.789020 1633728 cli_runner.go:164] Run: docker container inspect cert-expiration-007057 --format={{.State.Status}}
	I1222 01:19:52.818283 1633728 machine.go:94] provisionDockerMachine start ...
	I1222 01:19:52.818372 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:52.844784 1633728 main.go:144] libmachine: Using SSH client type: native
	I1222 01:19:52.845113 1633728 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38645 <nil> <nil>}
	I1222 01:19:52.845120 1633728 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:19:52.846416 1633728 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:19:55.981755 1633728 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-expiration-007057
	
	I1222 01:19:55.981771 1633728 ubuntu.go:182] provisioning hostname "cert-expiration-007057"
	I1222 01:19:55.981849 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:56.000559 1633728 main.go:144] libmachine: Using SSH client type: native
	I1222 01:19:56.000931 1633728 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38645 <nil> <nil>}
	I1222 01:19:56.000948 1633728 main.go:144] libmachine: About to run SSH command:
	sudo hostname cert-expiration-007057 && echo "cert-expiration-007057" | sudo tee /etc/hostname
	I1222 01:19:56.147330 1633728 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-expiration-007057
	
	I1222 01:19:56.147404 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:56.165086 1633728 main.go:144] libmachine: Using SSH client type: native
	I1222 01:19:56.165398 1633728 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38645 <nil> <nil>}
	I1222 01:19:56.165413 1633728 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-007057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-007057/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-007057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:19:56.298654 1633728 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:19:56.298670 1633728 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:19:56.298694 1633728 ubuntu.go:190] setting up certificates
	I1222 01:19:56.298711 1633728 provision.go:84] configureAuth start
	I1222 01:19:56.298777 1633728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-007057
	I1222 01:19:56.316681 1633728 provision.go:143] copyHostCerts
	I1222 01:19:56.316736 1633728 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:19:56.316743 1633728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:19:56.316824 1633728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:19:56.316947 1633728 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:19:56.316951 1633728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:19:56.316978 1633728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:19:56.317034 1633728 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:19:56.317037 1633728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:19:56.317060 1633728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:19:56.317110 1633728 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-007057 san=[127.0.0.1 192.168.85.2 cert-expiration-007057 localhost minikube]
	I1222 01:19:56.540638 1633728 provision.go:177] copyRemoteCerts
	I1222 01:19:56.540701 1633728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:19:56.540741 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:56.558219 1633728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38645 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa Username:docker}
	I1222 01:19:56.658158 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:19:56.676091 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1222 01:19:56.694255 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:19:56.712358 1633728 provision.go:87] duration metric: took 413.625071ms to configureAuth
	I1222 01:19:56.712377 1633728 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:19:56.712561 1633728 config.go:182] Loaded profile config "cert-expiration-007057": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:19:56.712567 1633728 machine.go:97] duration metric: took 3.894274295s to provisionDockerMachine
	I1222 01:19:56.712572 1633728 client.go:176] duration metric: took 9.144699799s to LocalClient.Create
	I1222 01:19:56.712584 1633728 start.go:167] duration metric: took 9.144761026s to libmachine.API.Create "cert-expiration-007057"
	I1222 01:19:56.712591 1633728 start.go:293] postStartSetup for "cert-expiration-007057" (driver="docker")
	I1222 01:19:56.712598 1633728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:19:56.712643 1633728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:19:56.712688 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:56.733668 1633728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38645 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa Username:docker}
	I1222 01:19:56.830365 1633728 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:19:56.833719 1633728 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:19:56.833738 1633728 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:19:56.833748 1633728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:19:56.833803 1633728 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:19:56.833883 1633728 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:19:56.833983 1633728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:19:56.841492 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:19:56.859121 1633728 start.go:296] duration metric: took 146.516475ms for postStartSetup
	I1222 01:19:56.859484 1633728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-007057
	I1222 01:19:56.876824 1633728 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/config.json ...
	I1222 01:19:56.877109 1633728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:19:56.877151 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:56.894052 1633728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38645 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa Username:docker}
	I1222 01:19:56.987342 1633728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:19:56.992263 1633728 start.go:128] duration metric: took 9.428016375s to createHost
	I1222 01:19:56.992278 1633728 start.go:83] releasing machines lock for "cert-expiration-007057", held for 9.428131232s
	I1222 01:19:56.992359 1633728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-007057
	I1222 01:19:57.013707 1633728 ssh_runner.go:195] Run: cat /version.json
	I1222 01:19:57.013754 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:57.014011 1633728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:19:57.014065 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:19:57.034963 1633728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38645 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa Username:docker}
	I1222 01:19:57.055827 1633728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38645 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa Username:docker}
	I1222 01:19:57.130675 1633728 ssh_runner.go:195] Run: systemctl --version
	I1222 01:19:57.235008 1633728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:19:57.241009 1633728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:19:57.241070 1633728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:19:57.277288 1633728 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:19:57.277302 1633728 start.go:496] detecting cgroup driver to use...
	I1222 01:19:57.277333 1633728 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:19:57.277426 1633728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:19:57.298416 1633728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:19:57.313108 1633728 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:19:57.313181 1633728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:19:57.331856 1633728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:19:57.351287 1633728 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:19:57.467547 1633728 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:19:57.599355 1633728 docker.go:234] disabling docker service ...
	I1222 01:19:57.599429 1633728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:19:57.623384 1633728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:19:57.636895 1633728 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:19:57.760386 1633728 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:19:57.893853 1633728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:19:57.907191 1633728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:19:57.921190 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:19:57.931127 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:19:57.940848 1633728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:19:57.940921 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:19:57.950372 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:19:57.959385 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:19:57.968902 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:19:57.977902 1633728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:19:57.988178 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:19:58.004343 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:19:58.015291 1633728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:19:58.026164 1633728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:19:58.035097 1633728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:19:58.042951 1633728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:19:58.159376 1633728 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:19:58.292773 1633728 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:19:58.292837 1633728 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:19:58.296870 1633728 start.go:564] Will wait 60s for crictl version
	I1222 01:19:58.296930 1633728 ssh_runner.go:195] Run: which crictl
	I1222 01:19:58.301155 1633728 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:19:58.330430 1633728 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:19:58.330494 1633728 ssh_runner.go:195] Run: containerd --version
	I1222 01:19:58.352811 1633728 ssh_runner.go:195] Run: containerd --version
	I1222 01:19:58.377957 1633728 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.1 ...
	I1222 01:19:58.380945 1633728 cli_runner.go:164] Run: docker network inspect cert-expiration-007057 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:19:58.399518 1633728 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:19:58.403341 1633728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:19:58.412782 1633728 kubeadm.go:884] updating cluster {Name:cert-expiration-007057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-007057 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:19:58.412884 1633728 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:19:58.412957 1633728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:19:58.437127 1633728 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:19:58.437140 1633728 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:19:58.437203 1633728 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:19:58.462245 1633728 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:19:58.462258 1633728 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:19:58.462265 1633728 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 containerd true true} ...
	I1222 01:19:58.462361 1633728 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-007057 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-007057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:19:58.462424 1633728 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:19:58.490865 1633728 cni.go:84] Creating CNI manager for ""
	I1222 01:19:58.490876 1633728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:19:58.490887 1633728 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:19:58.490909 1633728 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-007057 NodeName:cert-expiration-007057 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:19:58.491012 1633728 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-expiration-007057"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:19:58.491078 1633728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:19:58.498844 1633728 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:19:58.498905 1633728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:19:58.506450 1633728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:19:58.519806 1633728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:19:58.532763 1633728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 01:19:58.545303 1633728 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:19:58.548901 1633728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:19:58.558312 1633728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:19:58.672065 1633728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:19:58.688070 1633728 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057 for IP: 192.168.85.2
	I1222 01:19:58.688081 1633728 certs.go:195] generating shared ca certs ...
	I1222 01:19:58.688095 1633728 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:58.688240 1633728 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:19:58.688279 1633728 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:19:58.688285 1633728 certs.go:257] generating profile certs ...
	I1222 01:19:58.688348 1633728 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/client.key
	I1222 01:19:58.688358 1633728 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/client.crt with IP's: []
	I1222 01:19:58.895350 1633728 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/client.crt ...
	I1222 01:19:58.895366 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/client.crt: {Name:mke2784a43b1107af73a14d8f94ef5f267a4f10b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:58.895565 1633728 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/client.key ...
	I1222 01:19:58.895573 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/client.key: {Name:mk427cc5b8db95b75a1bfd0b673da17dcf4155ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:58.895666 1633728 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.key.c08c3d2c
	I1222 01:19:58.895689 1633728 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.crt.c08c3d2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:19:59.038065 1633728 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.crt.c08c3d2c ...
	I1222 01:19:59.038087 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.crt.c08c3d2c: {Name:mk4739e1e9a2ad3faf9d29033bff91ee04701b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:59.038306 1633728 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.key.c08c3d2c ...
	I1222 01:19:59.038316 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.key.c08c3d2c: {Name:mkaf2967ea06c5b89710589554fa9abb9e964971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:59.038408 1633728 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.crt.c08c3d2c -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.crt
	I1222 01:19:59.038487 1633728 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.key.c08c3d2c -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.key
	I1222 01:19:59.038540 1633728 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.key
	I1222 01:19:59.038552 1633728 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.crt with IP's: []
	I1222 01:19:59.612246 1633728 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.crt ...
	I1222 01:19:59.612263 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.crt: {Name:mka025768872876fd55f9f011f7038d75083f381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:59.612477 1633728 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.key ...
	I1222 01:19:59.612485 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.key: {Name:mk8fb91508745e9c7b0d4c05932ef1c8f6b5e602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:19:59.612672 1633728 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:19:59.612715 1633728 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:19:59.612722 1633728 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:19:59.612748 1633728 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:19:59.612779 1633728 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:19:59.612803 1633728 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:19:59.612845 1633728 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:19:59.613419 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:19:59.632699 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:19:59.651671 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:19:59.669763 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:19:59.688055 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1222 01:19:59.705234 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:19:59.723035 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:19:59.740469 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/cert-expiration-007057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:19:59.757662 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:19:59.777002 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:19:59.795095 1633728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:19:59.813378 1633728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:19:59.827506 1633728 ssh_runner.go:195] Run: openssl version
	I1222 01:19:59.834153 1633728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:19:59.841837 1633728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:19:59.849630 1633728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:19:59.853739 1633728 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:19:59.853801 1633728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:19:59.895919 1633728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:19:59.903364 1633728 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:19:59.910698 1633728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:19:59.918028 1633728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:19:59.925944 1633728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:19:59.929985 1633728 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:19:59.930042 1633728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:19:59.971517 1633728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:19:59.979339 1633728 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:19:59.986950 1633728 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:19:59.994836 1633728 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:20:00.028862 1633728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:20:00.055110 1633728 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:20:00.055195 1633728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:20:00.239513 1633728 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:20:00.398809 1633728 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:20:00.487313 1633728 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:20:00.527061 1633728 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:20:00.527117 1633728 kubeadm.go:401] StartCluster: {Name:cert-expiration-007057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-007057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:20:00.527192 1633728 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:20:00.527256 1633728 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:20:00.594615 1633728 cri.go:96] found id: ""
	I1222 01:20:00.594703 1633728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:20:00.608901 1633728 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:20:00.630389 1633728 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:20:00.630456 1633728 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:20:00.651679 1633728 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:20:00.651699 1633728 kubeadm.go:158] found existing configuration files:
	
	I1222 01:20:00.651759 1633728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:20:00.665164 1633728 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:20:00.665264 1633728 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:20:00.680427 1633728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:20:00.691389 1633728 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:20:00.691602 1633728 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:20:00.701480 1633728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:20:00.712580 1633728 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:20:00.712655 1633728 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:20:00.723324 1633728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:20:00.736795 1633728 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:20:00.736871 1633728 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:20:00.748193 1633728 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:20:00.800272 1633728 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 01:20:00.800335 1633728 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:20:00.827146 1633728 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:20:00.827212 1633728 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:20:00.827247 1633728 kubeadm.go:319] OS: Linux
	I1222 01:20:00.827293 1633728 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:20:00.827342 1633728 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:20:00.827388 1633728 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:20:00.827435 1633728 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:20:00.827482 1633728 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:20:00.827540 1633728 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:20:00.827584 1633728 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:20:00.827631 1633728 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:20:00.827675 1633728 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:20:00.903963 1633728 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:20:00.904098 1633728 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:20:00.904217 1633728 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:20:00.909391 1633728 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:20:00.916132 1633728 out.go:252]   - Generating certificates and keys ...
	I1222 01:20:00.916250 1633728 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:20:00.916337 1633728 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:20:01.128430 1633728 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:20:01.427409 1633728 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:20:01.712766 1633728 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:20:01.931262 1633728 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:20:02.217719 1633728 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:20:02.217926 1633728 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-007057 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:20:02.305251 1633728 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:20:02.305385 1633728 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-007057 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:20:02.875471 1633728 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:20:03.689625 1633728 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:20:04.926221 1633728 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:20:04.926329 1633728 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:20:05.359186 1633728 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:20:05.748852 1633728 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:20:05.919619 1633728 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:20:06.309009 1633728 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:20:07.082131 1633728 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:20:07.082797 1633728 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:20:07.085497 1633728 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:20:07.089425 1633728 out.go:252]   - Booting up control plane ...
	I1222 01:20:07.089534 1633728 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:20:07.089616 1633728 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:20:07.089686 1633728 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:20:07.108400 1633728 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:20:07.108516 1633728 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:20:07.116288 1633728 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:20:07.116777 1633728 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:20:07.117004 1633728 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:20:07.258629 1633728 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:20:07.258743 1633728 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:20:08.763390 1633728 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50508096s
	I1222 01:20:08.767264 1633728 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 01:20:08.767354 1633728 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1222 01:20:08.768059 1633728 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 01:20:08.768538 1633728 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 01:20:11.998326 1633728 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.229599753s
	I1222 01:20:13.545671 1633728 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.776454037s
	I1222 01:20:15.269660 1633728 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501796428s
	I1222 01:20:15.302441 1633728 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 01:20:15.315764 1633728 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 01:20:15.329531 1633728 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 01:20:15.329730 1633728 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-007057 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 01:20:15.341681 1633728 kubeadm.go:319] [bootstrap-token] Using token: k8igdf.gr9xe8005tkusgya
	I1222 01:20:15.344691 1633728 out.go:252]   - Configuring RBAC rules ...
	I1222 01:20:15.344809 1633728 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 01:20:15.349004 1633728 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 01:20:15.359888 1633728 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 01:20:15.364355 1633728 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 01:20:15.368526 1633728 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 01:20:15.373627 1633728 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 01:20:15.677725 1633728 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 01:20:16.103934 1633728 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 01:20:16.676632 1633728 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 01:20:16.678257 1633728 kubeadm.go:319] 
	I1222 01:20:16.678326 1633728 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 01:20:16.678331 1633728 kubeadm.go:319] 
	I1222 01:20:16.678407 1633728 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 01:20:16.678410 1633728 kubeadm.go:319] 
	I1222 01:20:16.678434 1633728 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 01:20:16.678492 1633728 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 01:20:16.678541 1633728 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 01:20:16.678544 1633728 kubeadm.go:319] 
	I1222 01:20:16.678597 1633728 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 01:20:16.678599 1633728 kubeadm.go:319] 
	I1222 01:20:16.678646 1633728 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 01:20:16.678649 1633728 kubeadm.go:319] 
	I1222 01:20:16.678700 1633728 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 01:20:16.678774 1633728 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 01:20:16.678841 1633728 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 01:20:16.678844 1633728 kubeadm.go:319] 
	I1222 01:20:16.678927 1633728 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 01:20:16.679022 1633728 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 01:20:16.679026 1633728 kubeadm.go:319] 
	I1222 01:20:16.679109 1633728 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k8igdf.gr9xe8005tkusgya \
	I1222 01:20:16.679211 1633728 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:55a75a4878aa9ec4082586970e7505f4492fcc0138e33ff8e472e16a7e145535 \
	I1222 01:20:16.679230 1633728 kubeadm.go:319] 	--control-plane 
	I1222 01:20:16.679232 1633728 kubeadm.go:319] 
	I1222 01:20:16.679329 1633728 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 01:20:16.679333 1633728 kubeadm.go:319] 
	I1222 01:20:16.679413 1633728 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k8igdf.gr9xe8005tkusgya \
	I1222 01:20:16.679514 1633728 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:55a75a4878aa9ec4082586970e7505f4492fcc0138e33ff8e472e16a7e145535 
	I1222 01:20:16.682123 1633728 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 01:20:16.682342 1633728 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:20:16.682445 1633728 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:20:16.682460 1633728 cni.go:84] Creating CNI manager for ""
	I1222 01:20:16.682467 1633728 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:20:16.685814 1633728 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1222 01:20:16.688549 1633728 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1222 01:20:16.693013 1633728 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1222 01:20:16.693025 1633728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1222 01:20:16.709404 1633728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1222 01:20:17.041621 1633728 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 01:20:17.041744 1633728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:20:17.041821 1633728 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-007057 minikube.k8s.io/updated_at=2025_12_22T01_20_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=cert-expiration-007057 minikube.k8s.io/primary=true
	I1222 01:20:17.251515 1633728 ops.go:34] apiserver oom_adj: -16
	I1222 01:20:17.251536 1633728 kubeadm.go:1114] duration metric: took 209.84708ms to wait for elevateKubeSystemPrivileges
	I1222 01:20:17.251549 1633728 kubeadm.go:403] duration metric: took 16.724445746s to StartCluster
	I1222 01:20:17.251564 1633728 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:20:17.251630 1633728 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:20:17.252548 1633728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:20:17.252768 1633728 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:20:17.252847 1633728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 01:20:17.253088 1633728 config.go:182] Loaded profile config "cert-expiration-007057": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:20:17.253117 1633728 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:20:17.253174 1633728 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-007057"
	I1222 01:20:17.253190 1633728 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-007057"
	I1222 01:20:17.253209 1633728 host.go:66] Checking if "cert-expiration-007057" exists ...
	I1222 01:20:17.253670 1633728 cli_runner.go:164] Run: docker container inspect cert-expiration-007057 --format={{.State.Status}}
	I1222 01:20:17.253939 1633728 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-007057"
	I1222 01:20:17.253953 1633728 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-007057"
	I1222 01:20:17.254250 1633728 cli_runner.go:164] Run: docker container inspect cert-expiration-007057 --format={{.State.Status}}
	I1222 01:20:17.257719 1633728 out.go:179] * Verifying Kubernetes components...
	I1222 01:20:17.262573 1633728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:20:17.289458 1633728 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:20:17.292433 1633728 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:20:17.292446 1633728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:20:17.292536 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:20:17.294203 1633728 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-007057"
	I1222 01:20:17.294231 1633728 host.go:66] Checking if "cert-expiration-007057" exists ...
	I1222 01:20:17.294685 1633728 cli_runner.go:164] Run: docker container inspect cert-expiration-007057 --format={{.State.Status}}
	I1222 01:20:17.329407 1633728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38645 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa Username:docker}
	I1222 01:20:17.338941 1633728 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:20:17.338954 1633728 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:20:17.339017 1633728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-007057
	I1222 01:20:17.366641 1633728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38645 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/cert-expiration-007057/id_rsa Username:docker}
	I1222 01:20:17.540782 1633728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:20:17.540944 1633728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 01:20:17.545522 1633728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:20:17.615319 1633728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:20:18.000081 1633728 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1222 01:20:18.001077 1633728 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:20:18.001141 1633728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:20:18.241235 1633728 api_server.go:72] duration metric: took 988.439826ms to wait for apiserver process to appear ...
	I1222 01:20:18.241249 1633728 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:20:18.241266 1633728 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 01:20:18.260636 1633728 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 01:20:18.263205 1633728 api_server.go:141] control plane version: v1.34.3
	I1222 01:20:18.263224 1633728 api_server.go:131] duration metric: took 21.970323ms to wait for apiserver health ...
	I1222 01:20:18.263250 1633728 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:20:18.272240 1633728 system_pods.go:59] 5 kube-system pods found
	I1222 01:20:18.272284 1633728 system_pods.go:61] "etcd-cert-expiration-007057" [eac73d0e-f83d-4b83-9e52-bfa97ecd7aaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1222 01:20:18.272302 1633728 system_pods.go:61] "kube-apiserver-cert-expiration-007057" [6b59aedb-f77d-4c68-8a06-5114c492b93b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:20:18.272326 1633728 system_pods.go:61] "kube-controller-manager-cert-expiration-007057" [d2e91e97-32e5-4836-8112-64959d181e03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1222 01:20:18.272332 1633728 system_pods.go:61] "kube-scheduler-cert-expiration-007057" [b3be9b91-0e7f-4cf8-879c-8990cade305a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:20:18.272337 1633728 system_pods.go:61] "storage-provisioner" [b74fe139-2dcf-49dc-82af-ebc035db35f4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1222 01:20:18.272343 1633728 system_pods.go:74] duration metric: took 9.086313ms to wait for pod list to return data ...
	I1222 01:20:18.272353 1633728 kubeadm.go:587] duration metric: took 1.019564689s to wait for: map[apiserver:true system_pods:true]
	I1222 01:20:18.272364 1633728 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:20:18.275889 1633728 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1222 01:20:18.278931 1633728 addons.go:530] duration metric: took 1.02578914s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1222 01:20:18.280018 1633728 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:20:18.280050 1633728 node_conditions.go:123] node cpu capacity is 2
	I1222 01:20:18.280063 1633728 node_conditions.go:105] duration metric: took 7.694694ms to run NodePressure ...
	I1222 01:20:18.280075 1633728 start.go:242] waiting for startup goroutines ...
	I1222 01:20:18.503973 1633728 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-007057" context rescaled to 1 replicas
	I1222 01:20:18.503995 1633728 start.go:247] waiting for cluster config update ...
	I1222 01:20:18.504010 1633728 start.go:256] writing updated cluster config ...
	I1222 01:20:18.504299 1633728 ssh_runner.go:195] Run: rm -f paused
	I1222 01:20:18.582809 1633728 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:20:18.585867 1633728 out.go:179] * Done! kubectl is now configured to use "cert-expiration-007057" cluster and "default" namespace by default
	I1222 01:21:45.322517 1585816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00128206s
	I1222 01:21:45.322562 1585816 kubeadm.go:319] 
	I1222 01:21:45.322621 1585816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:21:45.322672 1585816 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:21:45.322783 1585816 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:21:45.322793 1585816 kubeadm.go:319] 
	I1222 01:21:45.322899 1585816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:21:45.322936 1585816 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:21:45.322970 1585816 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:21:45.322979 1585816 kubeadm.go:319] 
	I1222 01:21:45.328729 1585816 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:21:45.329250 1585816 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:21:45.329373 1585816 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:21:45.329605 1585816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:21:45.329619 1585816 kubeadm.go:319] 
	I1222 01:21:45.329705 1585816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:21:45.329834 1585816 kubeadm.go:403] duration metric: took 12m18.552236915s to StartCluster
	I1222 01:21:45.329894 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:21:45.330304 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:21:45.365868 1585816 cri.go:96] found id: ""
	I1222 01:21:45.365899 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.365909 1585816 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:21:45.365916 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:21:45.365990 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:21:45.396166 1585816 cri.go:96] found id: ""
	I1222 01:21:45.396194 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.396205 1585816 logs.go:284] No container was found matching "etcd"
	I1222 01:21:45.396212 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:21:45.396276 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:21:45.421967 1585816 cri.go:96] found id: ""
	I1222 01:21:45.421994 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.422003 1585816 logs.go:284] No container was found matching "coredns"
	I1222 01:21:45.422009 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:21:45.422129 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:21:45.451493 1585816 cri.go:96] found id: ""
	I1222 01:21:45.451522 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.451532 1585816 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:21:45.451539 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:21:45.451605 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:21:45.476504 1585816 cri.go:96] found id: ""
	I1222 01:21:45.476579 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.476601 1585816 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:21:45.476628 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:21:45.476742 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:21:45.515272 1585816 cri.go:96] found id: ""
	I1222 01:21:45.515347 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.515372 1585816 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:21:45.515397 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:21:45.515518 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:21:45.548539 1585816 cri.go:96] found id: ""
	I1222 01:21:45.548641 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.548677 1585816 logs.go:284] No container was found matching "kindnet"
	I1222 01:21:45.548711 1585816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1222 01:21:45.548813 1585816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 01:21:45.580158 1585816 cri.go:96] found id: ""
	I1222 01:21:45.580234 1585816 logs.go:282] 0 containers: []
	W1222 01:21:45.580249 1585816 logs.go:284] No container was found matching "storage-provisioner"
	I1222 01:21:45.580260 1585816 logs.go:123] Gathering logs for kubelet ...
	I1222 01:21:45.580272 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:21:45.639018 1585816 logs.go:123] Gathering logs for dmesg ...
	I1222 01:21:45.639052 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:21:45.654323 1585816 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:21:45.654353 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:21:45.733002 1585816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:21:45.733025 1585816 logs.go:123] Gathering logs for containerd ...
	I1222 01:21:45.733038 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:21:45.785161 1585816 logs.go:123] Gathering logs for container status ...
	I1222 01:21:45.785197 1585816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:21:45.815129 1585816 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128206s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:21:45.815186 1585816 out.go:285] * 
	W1222 01:21:45.815280 1585816 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128206s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:21:45.815297 1585816 out.go:285] * 
	W1222 01:21:45.817452 1585816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:21:45.826168 1585816 out.go:203] 
	W1222 01:21:45.831295 1585816 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128206s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:21:45.831347 1585816 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:21:45.831376 1585816 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:21:45.835393 1585816 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.060246533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.061797349Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.385615244s"
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.061958500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.063355624Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.725092252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.727286815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.729458676Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.732971627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.733666263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 670.255845ms"
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.733709029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 22 01:13:39 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:39.734846869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
	Dec 22 01:13:41 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:41.756100687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:13:41 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:41.758532324Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21753021"
	Dec 22 01:13:41 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:41.761779156Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:13:41 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:41.765807041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:13:41 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:41.766836335Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 2.031952584s"
	Dec 22 01:13:41 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:13:41.766884171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.309756968Z" level=info msg="container event discarded" container=ed3066381b65668d0b5595eb77c595d36cc9f977935f95eb6a0abf69445b505e type=CONTAINER_DELETED_EVENT
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.323992662Z" level=info msg="container event discarded" container=e510edcdecf3ffba888dd5bde5df7c4214b7cfacb525110d34055d6d07e23de4 type=CONTAINER_DELETED_EVENT
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.336258935Z" level=info msg="container event discarded" container=7f77decf941392af855fe251f18c9595b7cabfd04ccf76ed25c78bf60077ace6 type=CONTAINER_DELETED_EVENT
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.336319547Z" level=info msg="container event discarded" container=87a5dc83d7562a3e990388e4228191ab6c8731d9e8670a02c22c1c3289cc79e0 type=CONTAINER_DELETED_EVENT
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.355560687Z" level=info msg="container event discarded" container=1e630fe87d844014ca97cf131fcb96f46ac5b98f121e2aed1209d6568a54b00d type=CONTAINER_DELETED_EVENT
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.355622226Z" level=info msg="container event discarded" container=7784c4c62559909c4df3c0ca310f5855e3f0d2fc9609392bdc8db834eab8ccef type=CONTAINER_DELETED_EVENT
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.370941655Z" level=info msg="container event discarded" container=57e0ac88fe583c96d59bd879a5c779502aa7a8c7bdca66f11efc9aa933ad360c type=CONTAINER_DELETED_EVENT
	Dec 22 01:18:31 kubernetes-upgrade-108800 containerd[554]: time="2025-12-22T01:18:31.371000790Z" level=info msg="container event discarded" container=496eddd25e5149131a9fa954e37ee47b63222494331ccb739a15dd362b953143 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:21:47 up 1 day,  8:04,  0 user,  load average: 0.54, 1.60, 2.12
	Linux kubernetes-upgrade-108800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:21:44 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:21:44 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 01:21:44 kubernetes-upgrade-108800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:44 kubernetes-upgrade-108800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:44 kubernetes-upgrade-108800 kubelet[14105]: E1222 01:21:44.780770   14105 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:21:44 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:21:44 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:21:45 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 01:21:45 kubernetes-upgrade-108800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:45 kubernetes-upgrade-108800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:45 kubernetes-upgrade-108800 kubelet[14156]: E1222 01:21:45.554720   14156 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:21:45 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:21:45 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:46 kubernetes-upgrade-108800 kubelet[14204]: E1222 01:21:46.320899   14204 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:46 kubernetes-upgrade-108800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:21:47 kubernetes-upgrade-108800 kubelet[14224]: E1222 01:21:47.056878   14224 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:21:47 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:21:47 kubernetes-upgrade-108800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-108800 -n kubernetes-upgrade-108800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-108800 -n kubernetes-upgrade-108800: exit status 2 (342.79806ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-108800" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-108800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-108800
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-108800: (2.257627214s)
--- FAIL: TestKubernetesUpgrade (796.88s)

TestStartStop/group/no-preload/serial/FirstStart (513.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1222 01:26:29.153497 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m31.910093675s)

-- stdout --
	* [no-preload-154186] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-154186" primary control-plane node in "no-preload-154186" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1222 01:26:02.236446 1661698 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:26:02.237044 1661698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:26:02.237080 1661698 out.go:374] Setting ErrFile to fd 2...
	I1222 01:26:02.237099 1661698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:26:02.237422 1661698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:26:02.237945 1661698 out.go:368] Setting JSON to false
	I1222 01:26:02.238997 1661698 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115715,"bootTime":1766251047,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:26:02.239097 1661698 start.go:143] virtualization:  
	I1222 01:26:02.243389 1661698 out.go:179] * [no-preload-154186] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:26:02.246601 1661698 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:26:02.246686 1661698 notify.go:221] Checking for updates...
	I1222 01:26:02.252698 1661698 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:26:02.255843 1661698 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:26:02.258994 1661698 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:26:02.262219 1661698 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:26:02.265531 1661698 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:26:02.268954 1661698 config.go:182] Loaded profile config "embed-certs-980842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:26:02.269149 1661698 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:26:02.319070 1661698 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:26:02.319204 1661698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:26:02.405139 1661698 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:26:02.39513126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:26:02.405239 1661698 docker.go:319] overlay module found
	I1222 01:26:02.408290 1661698 out.go:179] * Using the docker driver based on user configuration
	I1222 01:26:02.411223 1661698 start.go:309] selected driver: docker
	I1222 01:26:02.411242 1661698 start.go:928] validating driver "docker" against <nil>
	I1222 01:26:02.411255 1661698 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:26:02.411966 1661698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:26:02.487514 1661698 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 01:26:02.476781967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:26:02.487691 1661698 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:26:02.487964 1661698 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:26:02.491126 1661698 out.go:179] * Using Docker driver with root privileges
	I1222 01:26:02.494023 1661698 cni.go:84] Creating CNI manager for ""
	I1222 01:26:02.494193 1661698 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:26:02.494209 1661698 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:26:02.494310 1661698 start.go:353] cluster config:
	{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:26:02.497593 1661698 out.go:179] * Starting "no-preload-154186" primary control-plane node in "no-preload-154186" cluster
	I1222 01:26:02.500464 1661698 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:26:02.503452 1661698 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:26:02.506326 1661698 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:26:02.506482 1661698 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:26:02.506529 1661698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json: {Name:mkf00e2a3ee3adc30b437fb94a7c680e5b3b6049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:02.506737 1661698 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:26:02.507048 1661698 cache.go:107] acquiring lock: {Name:mk3bde21e751b3aa3caf7a41c8a37e36cec6e7cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.507122 1661698 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:26:02.507138 1661698 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.564µs
	I1222 01:26:02.507146 1661698 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:26:02.507164 1661698 cache.go:107] acquiring lock: {Name:mk4a15c8225bf94a78b514d4142ea41c6bb91faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.507248 1661698 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:02.507602 1661698 cache.go:107] acquiring lock: {Name:mkeb24b7f997eb1a1a3d59e2a2d68597fffc7c36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.507707 1661698 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:02.507943 1661698 cache.go:107] acquiring lock: {Name:mkf2939c17635a47347d3721871a718b69a7a19c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.508052 1661698 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:02.508263 1661698 cache.go:107] acquiring lock: {Name:mk1daf2f1163a462fd1f82e12b9d4b157cffc772 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.508367 1661698 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:02.508622 1661698 cache.go:107] acquiring lock: {Name:mk48171dacff6bbfb8016f0e5908022e81e1ea85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.508692 1661698 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:26:02.508706 1661698 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 87.886µs
	I1222 01:26:02.508713 1661698 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:26:02.508730 1661698 cache.go:107] acquiring lock: {Name:mkc08548a3ab9782a3dcbbb4e211790535cb9d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.508809 1661698 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:02.509047 1661698 cache.go:107] acquiring lock: {Name:mk2f653a9914a185aaa3299c67a548da6098dcf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.509147 1661698 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:02.512501 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:02.513016 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:02.513247 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:02.513423 1661698 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:02.513573 1661698 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:02.513869 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:02.555689 1661698 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:26:02.555716 1661698 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:26:02.555754 1661698 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:26:02.555789 1661698 start.go:360] acquireMachinesLock for no-preload-154186: {Name:mk9dee4f9b1c44d5e40729915965cd9e314df88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:26:02.555947 1661698 start.go:364] duration metric: took 127.246µs to acquireMachinesLock for "no-preload-154186"
	I1222 01:26:02.555991 1661698 start.go:93] Provisioning new machine with config: &{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:26:02.556146 1661698 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:26:02.559892 1661698 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:26:02.560176 1661698 start.go:159] libmachine.API.Create for "no-preload-154186" (driver="docker")
	I1222 01:26:02.560225 1661698 client.go:173] LocalClient.Create starting
	I1222 01:26:02.560330 1661698 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:26:02.560390 1661698 main.go:144] libmachine: Decoding PEM data...
	I1222 01:26:02.560421 1661698 main.go:144] libmachine: Parsing certificate...
	I1222 01:26:02.560496 1661698 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:26:02.560560 1661698 main.go:144] libmachine: Decoding PEM data...
	I1222 01:26:02.560588 1661698 main.go:144] libmachine: Parsing certificate...
	I1222 01:26:02.561118 1661698 cli_runner.go:164] Run: docker network inspect no-preload-154186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:26:02.589068 1661698 cli_runner.go:211] docker network inspect no-preload-154186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:26:02.589152 1661698 network_create.go:284] running [docker network inspect no-preload-154186] to gather additional debugging logs...
	I1222 01:26:02.589174 1661698 cli_runner.go:164] Run: docker network inspect no-preload-154186
	W1222 01:26:02.613833 1661698 cli_runner.go:211] docker network inspect no-preload-154186 returned with exit code 1
	I1222 01:26:02.613860 1661698 network_create.go:287] error running [docker network inspect no-preload-154186]: docker network inspect no-preload-154186: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-154186 not found
	I1222 01:26:02.613874 1661698 network_create.go:289] output of [docker network inspect no-preload-154186]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-154186 not found
	
	** /stderr **
	I1222 01:26:02.613972 1661698 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:26:02.642205 1661698 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:26:02.642560 1661698 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:26:02.642889 1661698 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:26:02.643301 1661698 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b31f3eb4f95f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:dc:a5:9c:a6:07} reservation:<nil>}
	I1222 01:26:02.643877 1661698 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001baa0d0}
	I1222 01:26:02.643908 1661698 network_create.go:124] attempt to create docker network no-preload-154186 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 01:26:02.643979 1661698 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-154186 no-preload-154186
	I1222 01:26:02.739896 1661698 network_create.go:108] docker network no-preload-154186 192.168.85.0/24 created
	I1222 01:26:02.739926 1661698 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-154186" container
	I1222 01:26:02.740019 1661698 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:26:02.785684 1661698 cli_runner.go:164] Run: docker volume create no-preload-154186 --label name.minikube.sigs.k8s.io=no-preload-154186 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:26:02.846259 1661698 oci.go:103] Successfully created a docker volume no-preload-154186
	I1222 01:26:02.846348 1661698 cli_runner.go:164] Run: docker run --rm --name no-preload-154186-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-154186 --entrypoint /usr/bin/test -v no-preload-154186:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:26:02.853971 1661698 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:26:02.870725 1661698 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:26:02.881445 1661698 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:26:02.904757 1661698 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:26:02.938194 1661698 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:26:02.991748 1661698 cache.go:162] opening:  /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:26:03.371161 1661698 cache.go:157] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:26:03.371190 1661698 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 862.929816ms
	I1222 01:26:03.371216 1661698 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:26:03.742143 1661698 oci.go:107] Successfully prepared a docker volume no-preload-154186
	I1222 01:26:03.742232 1661698 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1222 01:26:03.742406 1661698 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:26:03.742571 1661698 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:26:03.863556 1661698 cache.go:157] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:26:03.863597 1661698 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.355664369s
	I1222 01:26:03.863611 1661698 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:26:03.908983 1661698 cache.go:157] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:26:03.909019 1661698 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.399975359s
	I1222 01:26:03.909032 1661698 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:26:03.920540 1661698 cache.go:157] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:26:03.920575 1661698 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.412978829s
	I1222 01:26:03.920589 1661698 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:26:03.926671 1661698 cache.go:157] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:26:03.926699 1661698 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 1.417967725s
	I1222 01:26:03.926711 1661698 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:26:03.953549 1661698 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-154186 --name no-preload-154186 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-154186 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-154186 --network no-preload-154186 --ip 192.168.85.2 --volume no-preload-154186:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:26:04.050353 1661698 cache.go:157] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:26:04.050395 1661698 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.543229888s
	I1222 01:26:04.050433 1661698 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:26:04.050474 1661698 cache.go:87] Successfully saved all images to host disk.
	I1222 01:26:04.488691 1661698 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Running}}
	I1222 01:26:04.519002 1661698 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:26:04.539814 1661698 cli_runner.go:164] Run: docker exec no-preload-154186 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:26:04.608156 1661698 oci.go:144] the created container "no-preload-154186" has a running status.
	I1222 01:26:04.608186 1661698 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa...
	I1222 01:26:04.679593 1661698 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:26:04.703079 1661698 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:26:04.731183 1661698 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:26:04.731211 1661698 kic_runner.go:114] Args: [docker exec --privileged no-preload-154186 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:26:04.801425 1661698 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:26:04.826707 1661698 machine.go:94] provisionDockerMachine start ...
	I1222 01:26:04.826848 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:04.855586 1661698 main.go:144] libmachine: Using SSH client type: native
	I1222 01:26:04.856038 1661698 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38685 <nil> <nil>}
	I1222 01:26:04.856053 1661698 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:26:04.856826 1661698 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:26:08.021285 1661698 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:26:08.021331 1661698 ubuntu.go:182] provisioning hostname "no-preload-154186"
	I1222 01:26:08.021464 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:08.045702 1661698 main.go:144] libmachine: Using SSH client type: native
	I1222 01:26:08.046098 1661698 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38685 <nil> <nil>}
	I1222 01:26:08.046126 1661698 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-154186 && echo "no-preload-154186" | sudo tee /etc/hostname
	I1222 01:26:08.213308 1661698 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:26:08.213436 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:08.237829 1661698 main.go:144] libmachine: Using SSH client type: native
	I1222 01:26:08.238257 1661698 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38685 <nil> <nil>}
	I1222 01:26:08.238283 1661698 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-154186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-154186/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-154186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:26:08.387238 1661698 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:26:08.387313 1661698 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:26:08.387391 1661698 ubuntu.go:190] setting up certificates
	I1222 01:26:08.387433 1661698 provision.go:84] configureAuth start
	I1222 01:26:08.387528 1661698 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:26:08.407457 1661698 provision.go:143] copyHostCerts
	I1222 01:26:08.407545 1661698 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:26:08.407558 1661698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:26:08.407648 1661698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:26:08.407746 1661698 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:26:08.407755 1661698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:26:08.407782 1661698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:26:08.407844 1661698 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:26:08.407854 1661698 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:26:08.407879 1661698 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:26:08.407941 1661698 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.no-preload-154186 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-154186]
	I1222 01:26:08.614778 1661698 provision.go:177] copyRemoteCerts
	I1222 01:26:08.614880 1661698 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:26:08.614932 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:08.651510 1661698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38685 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:26:08.758418 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:26:08.777616 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:26:08.797113 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:26:08.823855 1661698 provision.go:87] duration metric: took 436.378837ms to configureAuth
	I1222 01:26:08.823891 1661698 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:26:08.824134 1661698 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:26:08.824151 1661698 machine.go:97] duration metric: took 3.997390612s to provisionDockerMachine
	I1222 01:26:08.824160 1661698 client.go:176] duration metric: took 6.26392243s to LocalClient.Create
	I1222 01:26:08.824179 1661698 start.go:167] duration metric: took 6.26400216s to libmachine.API.Create "no-preload-154186"
	I1222 01:26:08.824194 1661698 start.go:293] postStartSetup for "no-preload-154186" (driver="docker")
	I1222 01:26:08.824205 1661698 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:26:08.824279 1661698 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:26:08.824335 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:08.854242 1661698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38685 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:26:08.972151 1661698 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:26:08.976375 1661698 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:26:08.976410 1661698 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:26:08.976425 1661698 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:26:08.976490 1661698 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:26:08.976571 1661698 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:26:08.976692 1661698 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:26:08.985855 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:26:09.008337 1661698 start.go:296] duration metric: took 184.107863ms for postStartSetup
	I1222 01:26:09.008834 1661698 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:26:09.029515 1661698 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:26:09.029807 1661698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:26:09.029879 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:09.066348 1661698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38685 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:26:09.171526 1661698 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:26:09.177357 1661698 start.go:128] duration metric: took 6.621167885s to createHost
	I1222 01:26:09.177380 1661698 start.go:83] releasing machines lock for "no-preload-154186", held for 6.62141748s
	I1222 01:26:09.177449 1661698 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:26:09.200044 1661698 ssh_runner.go:195] Run: cat /version.json
	I1222 01:26:09.200075 1661698 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:26:09.200113 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:09.200143 1661698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:26:09.231727 1661698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38685 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:26:09.234008 1661698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38685 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:26:09.426715 1661698 ssh_runner.go:195] Run: systemctl --version
	I1222 01:26:09.434725 1661698 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:26:09.440117 1661698 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:26:09.440291 1661698 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:26:09.472787 1661698 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:26:09.472862 1661698 start.go:496] detecting cgroup driver to use...
	I1222 01:26:09.472910 1661698 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:26:09.472995 1661698 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:26:09.493625 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:26:09.510154 1661698 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:26:09.510223 1661698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:26:09.530696 1661698 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:26:09.551907 1661698 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:26:09.707582 1661698 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:26:09.876343 1661698 docker.go:234] disabling docker service ...
	I1222 01:26:09.876496 1661698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:26:09.911610 1661698 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:26:09.927537 1661698 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:26:10.099574 1661698 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:26:10.256769 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:26:10.273063 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:26:10.289367 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:26:10.299371 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:26:10.309331 1661698 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:26:10.309455 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:26:10.320102 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:26:10.329653 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:26:10.339349 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:26:10.349068 1661698 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:26:10.358143 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:26:10.368107 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:26:10.377997 1661698 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:26:10.387999 1661698 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:26:10.397252 1661698 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:26:10.405731 1661698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:26:10.596247 1661698 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:26:10.739442 1661698 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:26:10.739597 1661698 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:26:10.746777 1661698 start.go:564] Will wait 60s for crictl version
	I1222 01:26:10.746866 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:10.753484 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:26:10.797425 1661698 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:26:10.797576 1661698 ssh_runner.go:195] Run: containerd --version
	I1222 01:26:10.829128 1661698 ssh_runner.go:195] Run: containerd --version
	I1222 01:26:10.878561 1661698 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:26:10.881708 1661698 cli_runner.go:164] Run: docker network inspect no-preload-154186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:26:10.907895 1661698 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:26:10.913483 1661698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:26:10.929117 1661698 kubeadm.go:884] updating cluster {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:26:10.929245 1661698 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:26:10.929312 1661698 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:26:10.973103 1661698 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1222 01:26:10.973126 1661698 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1222 01:26:10.973195 1661698 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:26:10.973405 1661698 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:10.973501 1661698 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:10.973591 1661698 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:10.973671 1661698 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:10.973745 1661698 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1222 01:26:10.973847 1661698 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:10.973923 1661698 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:10.977172 1661698 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:10.977459 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:10.977600 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:10.977725 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:10.977858 1661698 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:26:10.978228 1661698 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:10.978415 1661698 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:10.978626 1661698 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1222 01:26:11.224763 1661698 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1222 01:26:11.224843 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:11.254344 1661698 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1222 01:26:11.254472 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:11.269496 1661698 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1222 01:26:11.269574 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:11.275086 1661698 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1222 01:26:11.275266 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1222 01:26:11.276207 1661698 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1222 01:26:11.276307 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:11.279824 1661698 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1222 01:26:11.279942 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:11.304002 1661698 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1222 01:26:11.304076 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:11.346734 1661698 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1222 01:26:11.346781 1661698 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:11.346831 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:11.348845 1661698 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1222 01:26:11.348890 1661698 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:11.348938 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:11.356107 1661698 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1222 01:26:11.356153 1661698 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:11.356205 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:11.424788 1661698 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1222 01:26:11.424834 1661698 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1222 01:26:11.424885 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:11.426547 1661698 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1222 01:26:11.426635 1661698 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1222 01:26:11.426663 1661698 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:11.426718 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:11.426785 1661698 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:11.426837 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:11.432955 1661698 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1222 01:26:11.432998 1661698 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:11.433051 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:11.433133 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:11.433188 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:11.433247 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:11.439502 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:11.440780 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:26:11.453762 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:11.602909 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:11.602982 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:11.603045 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:11.603102 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:11.603161 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:11.603227 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:26:11.646773 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:11.799668 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1222 01:26:11.799749 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 01:26:11.799802 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:11.799856 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 01:26:11.799909 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 01:26:11.799974 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 01:26:11.913233 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1222 01:26:11.997868 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 01:26:11.998062 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:26:11.998230 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1222 01:26:11.998344 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1222 01:26:11.998464 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 01:26:11.998556 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:26:11.998673 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 01:26:11.998764 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 01:26:11.998857 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:26:11.998948 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 01:26:11.999052 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:26:12.036209 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1222 01:26:12.036446 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:26:12.061549 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1222 01:26:12.061645 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1222 01:26:12.061754 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1222 01:26:12.061800 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1222 01:26:12.061884 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1222 01:26:12.061914 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1222 01:26:12.061993 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1222 01:26:12.062037 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1222 01:26:12.062139 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 01:26:12.062267 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:26:12.062350 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1222 01:26:12.062382 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1222 01:26:12.062460 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1222 01:26:12.062512 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1222 01:26:12.132673 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1222 01:26:12.132764 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	W1222 01:26:12.163386 1661698 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1222 01:26:12.163579 1661698 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1222 01:26:12.163668 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:26:12.232094 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1222 01:26:12.232234 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1222 01:26:12.388563 1661698 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1222 01:26:12.388789 1661698 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:26:12.388896 1661698 ssh_runner.go:195] Run: which crictl
	I1222 01:26:12.638728 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:26:12.638789 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1222 01:26:12.717584 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:26:12.717660 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 01:26:12.820049 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:26:14.461251 1661698 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.641133829s)
	I1222 01:26:14.461323 1661698 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:26:14.461220 1661698 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.743527016s)
	I1222 01:26:14.461441 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1222 01:26:14.461477 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:26:14.461546 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 01:26:15.797576 1661698 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.336229987s)
	I1222 01:26:15.797632 1661698 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1222 01:26:15.797726 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:26:15.799295 1661698 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.337708065s)
	I1222 01:26:15.799324 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1222 01:26:15.799344 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:26:15.799402 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1222 01:26:15.807560 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1222 01:26:15.807606 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1222 01:26:17.344433 1661698 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.545003618s)
	I1222 01:26:17.344458 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1222 01:26:17.344476 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:26:17.344527 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1222 01:26:19.382146 1661698 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (2.037532212s)
	I1222 01:26:19.382172 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1222 01:26:19.382190 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:26:19.382241 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 01:26:21.058111 1661698 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.675847848s)
	I1222 01:26:21.058134 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1222 01:26:21.058152 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:26:21.058205 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 01:26:22.704858 1661698 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.646630186s)
	I1222 01:26:22.704940 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1222 01:26:22.704984 1661698 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:26:22.705062 1661698 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1222 01:26:23.250182 1661698 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1222 01:26:23.250226 1661698 cache_images.go:125] Successfully loaded all cached images
	I1222 01:26:23.250233 1661698 cache_images.go:94] duration metric: took 12.277093873s to LoadCachedImages
	I1222 01:26:23.250247 1661698 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:26:23.250353 1661698 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-154186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:26:23.250430 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:26:23.296241 1661698 cni.go:84] Creating CNI manager for ""
	I1222 01:26:23.296272 1661698 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:26:23.296285 1661698 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:26:23.296309 1661698 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-154186 NodeName:no-preload-154186 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:26:23.296437 1661698 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-154186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:26:23.296512 1661698 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:26:23.307276 1661698 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1222 01:26:23.307349 1661698 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:26:23.316532 1661698 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1222 01:26:23.316636 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1222 01:26:23.317337 1661698 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm
	I1222 01:26:23.317782 1661698 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet
	I1222 01:26:23.324127 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1222 01:26:23.324170 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1222 01:26:24.422414 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:26:24.458554 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1222 01:26:24.468936 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1222 01:26:24.469037 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
	I1222 01:26:24.688947 1661698 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1222 01:26:24.703642 1661698 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1222 01:26:24.703736 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1222 01:26:25.264712 1661698 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:26:25.278066 1661698 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:26:25.295507 1661698 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:26:25.312004 1661698 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 01:26:25.339079 1661698 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:26:25.344813 1661698 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:26:25.367591 1661698 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:26:25.544254 1661698 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:26:25.564017 1661698 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186 for IP: 192.168.85.2
	I1222 01:26:25.564043 1661698 certs.go:195] generating shared ca certs ...
	I1222 01:26:25.564061 1661698 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:25.564202 1661698 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:26:25.564265 1661698 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:26:25.564276 1661698 certs.go:257] generating profile certs ...
	I1222 01:26:25.564344 1661698 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.key
	I1222 01:26:25.564360 1661698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.crt with IP's: []
	I1222 01:26:25.826644 1661698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.crt ...
	I1222 01:26:25.826682 1661698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.crt: {Name:mk7abc9cd015505bbb4612461905a35e217d8da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:25.826914 1661698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.key ...
	I1222 01:26:25.826931 1661698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.key: {Name:mk1cd891504191c481f7c49b753fa7ebf7ab098d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:25.827043 1661698 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key.e54c24a5
	I1222 01:26:25.827064 1661698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt.e54c24a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 01:26:26.294438 1661698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt.e54c24a5 ...
	I1222 01:26:26.294471 1661698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt.e54c24a5: {Name:mke8e3564aa2daadd0010da19a87c1cdae3fb3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:26.294660 1661698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key.e54c24a5 ...
	I1222 01:26:26.294677 1661698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key.e54c24a5: {Name:mk70005dbd9a3fe0ab9df8a6146d4a5e9ef53683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:26.294761 1661698 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt.e54c24a5 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt
	I1222 01:26:26.294848 1661698 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key.e54c24a5 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key
	I1222 01:26:26.294915 1661698 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key
	I1222 01:26:26.294935 1661698 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.crt with IP's: []
	I1222 01:26:26.514997 1661698 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.crt ...
	I1222 01:26:26.515028 1661698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.crt: {Name:mk0b76afeeb5aaec64b767bb3bc287de02f6eb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:26.515391 1661698 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key ...
	I1222 01:26:26.515416 1661698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key: {Name:mk038d898f94846e196ac38134e246592bcc1c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:26:26.515645 1661698 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:26:26.515696 1661698 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:26:26.515710 1661698 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:26:26.515739 1661698 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:26:26.515767 1661698 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:26:26.515797 1661698 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:26:26.515844 1661698 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:26:26.516396 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:26:26.539418 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:26:26.563603 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:26:26.587981 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:26:26.614301 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:26:26.636581 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:26:26.661277 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:26:26.682180 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:26:26.703719 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:26:26.730024 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:26:26.751728 1661698 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:26:26.775948 1661698 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:26:26.794066 1661698 ssh_runner.go:195] Run: openssl version
	I1222 01:26:26.803321 1661698 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:26:26.812559 1661698 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:26:26.820982 1661698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:26:26.830621 1661698 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:26:26.830708 1661698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:26:26.878381 1661698 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:26:26.890162 1661698 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:26:26.899392 1661698 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:26:26.908233 1661698 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:26:26.920681 1661698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:26:26.929202 1661698 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:26:26.929291 1661698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:26:26.980731 1661698 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:26:26.989700 1661698 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:26:26.998152 1661698 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:26:27.008556 1661698 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:26:27.018328 1661698 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:26:27.023089 1661698 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:26:27.023161 1661698 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:26:27.066924 1661698 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:26:27.075509 1661698 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:26:27.085285 1661698 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:26:27.091464 1661698 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:26:27.091522 1661698 kubeadm.go:401] StartCluster: {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:26:27.091622 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:26:27.091694 1661698 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:26:27.143450 1661698 cri.go:96] found id: ""
	I1222 01:26:27.143552 1661698 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:26:27.159836 1661698 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:26:27.170312 1661698 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:26:27.170379 1661698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:26:27.180803 1661698 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:26:27.180928 1661698 kubeadm.go:158] found existing configuration files:
	
	I1222 01:26:27.181029 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:26:27.191569 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:26:27.191649 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:26:27.201062 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:26:27.213918 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:26:27.214010 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:26:27.222878 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:26:27.234973 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:26:27.235052 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:26:27.246506 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:26:27.259574 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:26:27.259654 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:26:27.268491 1661698 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:26:27.323776 1661698 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:26:27.324276 1661698 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:26:27.437666 1661698 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:26:27.437747 1661698 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:26:27.437784 1661698 kubeadm.go:319] OS: Linux
	I1222 01:26:27.437832 1661698 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:26:27.437884 1661698 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:26:27.437934 1661698 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:26:27.437997 1661698 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:26:27.438056 1661698 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:26:27.438139 1661698 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:26:27.438189 1661698 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:26:27.438246 1661698 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:26:27.438295 1661698 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:26:27.525032 1661698 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:26:27.525147 1661698 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:26:27.525239 1661698 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:26:27.534584 1661698 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:26:27.537882 1661698 out.go:252]   - Generating certificates and keys ...
	I1222 01:26:27.538046 1661698 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:26:27.538213 1661698 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:26:28.026894 1661698 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:26:28.383674 1661698 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:26:28.445088 1661698 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:26:28.518832 1661698 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:26:28.943026 1661698 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:26:28.943408 1661698 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:26:29.402735 1661698 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:26:29.403101 1661698 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 01:26:29.653349 1661698 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:26:29.718406 1661698 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:26:30.257632 1661698 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:26:30.257975 1661698 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:26:30.374530 1661698 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:26:30.563318 1661698 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:26:30.880348 1661698 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:26:31.150687 1661698 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:26:31.204962 1661698 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:26:31.206140 1661698 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:26:31.220658 1661698 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:26:31.241278 1661698 out.go:252]   - Booting up control plane ...
	I1222 01:26:31.241423 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:26:31.241509 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:26:31.241604 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:26:31.284050 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:26:31.284161 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:26:31.296035 1661698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:26:31.296135 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:26:31.296178 1661698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:26:31.526931 1661698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:26:31.527063 1661698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:30:31.528083 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001198283s
	I1222 01:30:31.528113 1661698 kubeadm.go:319] 
	I1222 01:30:31.528168 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:30:31.528204 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:30:31.528304 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:30:31.528309 1661698 kubeadm.go:319] 
	I1222 01:30:31.528414 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:30:31.528445 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:30:31.528475 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:30:31.528479 1661698 kubeadm.go:319] 
	I1222 01:30:31.534468 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:30:31.534939 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:30:31.535063 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:30:31.535323 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:30:31.535332 1661698 kubeadm.go:319] 
	I1222 01:30:31.535406 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:30:31.535527 1661698 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	
	I1222 01:30:31.535610 1661698 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:30:31.944583 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:30:31.959295 1661698 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:30:31.959366 1661698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:30:31.967786 1661698 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:30:31.967809 1661698 kubeadm.go:158] found existing configuration files:
	
	I1222 01:30:31.967875 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:30:31.976572 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:30:31.976645 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:30:31.984566 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:30:31.995193 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:30:31.995287 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:30:32.006676 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.018592 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:30:32.018733 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.028237 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:30:32.043043 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:30:32.043172 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:30:32.052235 1661698 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:30:32.094134 1661698 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:30:32.094474 1661698 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:30:32.174573 1661698 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:30:32.174734 1661698 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:30:32.174816 1661698 kubeadm.go:319] OS: Linux
	I1222 01:30:32.174901 1661698 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:30:32.174991 1661698 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:30:32.175073 1661698 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:30:32.175183 1661698 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:30:32.175273 1661698 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:30:32.175356 1661698 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:30:32.175408 1661698 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:30:32.175461 1661698 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:30:32.175510 1661698 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:30:32.244754 1661698 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:30:32.244977 1661698 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:30:32.245121 1661698 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:30:32.250598 1661698 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:30:32.253515 1661698 out.go:252]   - Generating certificates and keys ...
	I1222 01:30:32.253625 1661698 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:30:32.253721 1661698 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:30:32.253819 1661698 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:30:32.253899 1661698 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:30:32.253989 1661698 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:30:32.254107 1661698 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:30:32.254400 1661698 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:30:32.254506 1661698 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:30:32.254956 1661698 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:30:32.255237 1661698 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:30:32.255491 1661698 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:30:32.255552 1661698 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:30:32.402631 1661698 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:30:32.599258 1661698 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:30:33.036089 1661698 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:30:33.328680 1661698 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:30:33.401037 1661698 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:30:33.401569 1661698 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:30:33.404184 1661698 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:30:33.407507 1661698 out.go:252]   - Booting up control plane ...
	I1222 01:30:33.407615 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:30:33.407700 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:30:33.407772 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:30:33.430782 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:30:33.431319 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:30:33.439215 1661698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:30:33.439558 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:30:33.439606 1661698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:30:33.604011 1661698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:30:33.604133 1661698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:34:33.605118 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202243s
	I1222 01:34:33.610423 1661698 kubeadm.go:319] 
	I1222 01:34:33.610510 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:34:33.610555 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:34:33.610673 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:34:33.610685 1661698 kubeadm.go:319] 
	I1222 01:34:33.610798 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:34:33.610837 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:34:33.610875 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:34:33.610884 1661698 kubeadm.go:319] 
	I1222 01:34:33.611729 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:34:33.612160 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:34:33.612286 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:34:33.612615 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:34:33.612639 1661698 kubeadm.go:319] 
	I1222 01:34:33.612738 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:34:33.612826 1661698 kubeadm.go:403] duration metric: took 8m6.521308561s to StartCluster
	I1222 01:34:33.612869 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:34:33.612963 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:34:33.638994 1661698 cri.go:96] found id: ""
	I1222 01:34:33.639065 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.639100 1661698 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:34:33.639124 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:34:33.639214 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:34:33.664403 1661698 cri.go:96] found id: ""
	I1222 01:34:33.664427 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.664436 1661698 logs.go:284] No container was found matching "etcd"
	I1222 01:34:33.664446 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:34:33.664509 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:34:33.695712 1661698 cri.go:96] found id: ""
	I1222 01:34:33.695738 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.695748 1661698 logs.go:284] No container was found matching "coredns"
	I1222 01:34:33.695754 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:34:33.695824 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:34:33.725832 1661698 cri.go:96] found id: ""
	I1222 01:34:33.725860 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.725869 1661698 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:34:33.725877 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:34:33.725946 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:34:33.754495 1661698 cri.go:96] found id: ""
	I1222 01:34:33.754525 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.754545 1661698 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:34:33.754568 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:34:33.754673 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:34:33.781930 1661698 cri.go:96] found id: ""
	I1222 01:34:33.781958 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.781967 1661698 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:34:33.781974 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:34:33.782035 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:34:33.809335 1661698 cri.go:96] found id: ""
	I1222 01:34:33.809412 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.809435 1661698 logs.go:284] No container was found matching "kindnet"
	I1222 01:34:33.809461 1661698 logs.go:123] Gathering logs for kubelet ...
	I1222 01:34:33.809500 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:34:33.870551 1661698 logs.go:123] Gathering logs for dmesg ...
	I1222 01:34:33.870590 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:34:33.887403 1661698 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:34:33.887432 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:34:33.956483 1661698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:34:33.956566 1661698 logs.go:123] Gathering logs for containerd ...
	I1222 01:34:33.956596 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:34:34.001866 1661698 logs.go:123] Gathering logs for container status ...
	I1222 01:34:34.001912 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:34:34.042014 1661698 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:34:34.042151 1661698 out.go:285] * 
	W1222 01:34:34.042248 1661698 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.042299 1661698 out.go:285] * 
	W1222 01:34:34.044850 1661698 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:34:34.050486 1661698 out.go:203] 
	W1222 01:34:34.053486 1661698 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.053539 1661698 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:34:34.053562 1661698 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:34:34.056784 1661698 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-154186
helpers_test.go:244: (dbg) docker inspect no-preload-154186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	        "Created": "2025-12-22T01:26:03.981102368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1662042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:26:04.182435984Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hosts",
	        "LogPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7-json.log",
	        "Name": "/no-preload-154186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-154186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-154186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	                "LowerDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-154186",
	                "Source": "/var/lib/docker/volumes/no-preload-154186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-154186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-154186",
	                "name.minikube.sigs.k8s.io": "no-preload-154186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c84f7a2a870d36246b7a801b7bf7055532e2138e424e145ab2b2ac49b81f1d2",
	            "SandboxKey": "/var/run/docker/netns/4c84f7a2a870",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38685"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38686"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38689"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38687"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38688"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-154186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:f0:0c:13:3c:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16496b4bf71c897a797bd98394f7b6821dd2bd1d65c81e9f1efd0fcd1f14d1a5",
	                    "EndpointID": "60d289b6dced5e2b95a24119998812f02b77a1cbd32a594dea6fb7ca62aa8c31",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-154186",
	                        "66e4ffc32ac4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 6 (366.029232ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:34:34.505340 1678178 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-154186 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-778490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:24 UTC │ 22 Dec 25 01:24 UTC │
	│ start   │ -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:24 UTC │ 22 Dec 25 01:25 UTC │
	│ image   │ old-k8s-version-433815 image list --format=json                                                                                                                                                                                                          │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:28:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:28:29.517235 1670843 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:28:29.517360 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517375 1670843 out.go:374] Setting ErrFile to fd 2...
	I1222 01:28:29.517381 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517635 1670843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:28:29.518139 1670843 out.go:368] Setting JSON to false
	I1222 01:28:29.519021 1670843 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115862,"bootTime":1766251047,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:28:29.519085 1670843 start.go:143] virtualization:  
	I1222 01:28:29.523165 1670843 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:28:29.526534 1670843 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:28:29.526612 1670843 notify.go:221] Checking for updates...
	I1222 01:28:29.533896 1670843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:28:29.537080 1670843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:28:29.540168 1670843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:28:29.543253 1670843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:28:29.546250 1670843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:28:29.549849 1670843 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:29.549971 1670843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:28:29.575293 1670843 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:28:29.575440 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.641901 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.632088848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.642009 1670843 docker.go:319] overlay module found
	I1222 01:28:29.645181 1670843 out.go:179] * Using the docker driver based on user configuration
	I1222 01:28:29.648076 1670843 start.go:309] selected driver: docker
	I1222 01:28:29.648097 1670843 start.go:928] validating driver "docker" against <nil>
	I1222 01:28:29.648110 1670843 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:28:29.648868 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.705361 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.695866952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.705519 1670843 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:28:29.705566 1670843 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:28:29.705783 1670843 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:28:29.708430 1670843 out.go:179] * Using Docker driver with root privileges
	I1222 01:28:29.711256 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:29.711321 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:29.711336 1670843 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:28:29.711424 1670843 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:29.714525 1670843 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:28:29.717251 1670843 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:28:29.720207 1670843 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:28:29.723048 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:29.723095 1670843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:28:29.723110 1670843 cache.go:65] Caching tarball of preloaded images
	I1222 01:28:29.723137 1670843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:28:29.723197 1670843 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:28:29.723207 1670843 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:28:29.723316 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:29.723334 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json: {Name:mk7d2be4f8d5fd1ff0598339a0c1f4c8dc1289c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:29.752728 1670843 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:28:29.752750 1670843 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:28:29.752765 1670843 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:28:29.752806 1670843 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:28:29.752907 1670843 start.go:364] duration metric: took 85.72µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:28:29.752938 1670843 start.go:93] Provisioning new machine with config: &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:28:29.753004 1670843 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:28:29.756462 1670843 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:28:29.756690 1670843 start.go:159] libmachine.API.Create for "newest-cni-869293" (driver="docker")
	I1222 01:28:29.756724 1670843 client.go:173] LocalClient.Create starting
	I1222 01:28:29.756801 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:28:29.756843 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756861 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.756915 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:28:29.756931 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756942 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.757315 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:28:29.774941 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:28:29.775018 1670843 network_create.go:284] running [docker network inspect newest-cni-869293] to gather additional debugging logs...
	I1222 01:28:29.775034 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293
	W1222 01:28:29.790350 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 returned with exit code 1
	I1222 01:28:29.790377 1670843 network_create.go:287] error running [docker network inspect newest-cni-869293]: docker network inspect newest-cni-869293: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-869293 not found
	I1222 01:28:29.790389 1670843 network_create.go:289] output of [docker network inspect newest-cni-869293]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-869293 not found
	
	** /stderr **
	I1222 01:28:29.790489 1670843 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:29.811617 1670843 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:28:29.811973 1670843 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:28:29.812349 1670843 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:28:29.812832 1670843 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3f50}
	I1222 01:28:29.812860 1670843 network_create.go:124] attempt to create docker network newest-cni-869293 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:28:29.812925 1670843 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-869293 newest-cni-869293
	I1222 01:28:29.869740 1670843 network_create.go:108] docker network newest-cni-869293 192.168.76.0/24 created
	I1222 01:28:29.869775 1670843 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-869293" container
	I1222 01:28:29.869851 1670843 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:28:29.886155 1670843 cli_runner.go:164] Run: docker volume create newest-cni-869293 --label name.minikube.sigs.k8s.io=newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:28:29.904602 1670843 oci.go:103] Successfully created a docker volume newest-cni-869293
	I1222 01:28:29.904703 1670843 cli_runner.go:164] Run: docker run --rm --name newest-cni-869293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --entrypoint /usr/bin/test -v newest-cni-869293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:28:30.526342 1670843 oci.go:107] Successfully prepared a docker volume newest-cni-869293
	I1222 01:28:30.526403 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:30.526413 1670843 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:28:30.526485 1670843 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:28:35.482640 1670843 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956113773s)
	I1222 01:28:35.482672 1670843 kic.go:203] duration metric: took 4.956255379s to extract preloaded images to volume ...
	W1222 01:28:35.482805 1670843 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:28:35.482926 1670843 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:28:35.547405 1670843 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-869293 --name newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-869293 --network newest-cni-869293 --ip 192.168.76.2 --volume newest-cni-869293:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:28:35.858405 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Running}}
	I1222 01:28:35.884175 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:35.909405 1670843 cli_runner.go:164] Run: docker exec newest-cni-869293 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:28:35.959724 1670843 oci.go:144] the created container "newest-cni-869293" has a running status.
	I1222 01:28:35.959757 1670843 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa...
	I1222 01:28:36.206517 1670843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:28:36.231277 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.257362 1670843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:28:36.257407 1670843 kic_runner.go:114] Args: [docker exec --privileged newest-cni-869293 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:28:36.342107 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.366444 1670843 machine.go:94] provisionDockerMachine start ...
	I1222 01:28:36.366556 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:36.385946 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:36.386403 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:36.386423 1670843 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:28:36.387088 1670843 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39762->127.0.0.1:38695: read: connection reset by peer
	I1222 01:28:39.522339 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.522366 1670843 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:28:39.522451 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.547399 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.547774 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.547786 1670843 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:28:39.694424 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.694503 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.714200 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.714526 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.714551 1670843 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:28:39.850447 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:28:39.850500 1670843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:28:39.850540 1670843 ubuntu.go:190] setting up certificates
	I1222 01:28:39.850552 1670843 provision.go:84] configureAuth start
	I1222 01:28:39.850620 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:39.867785 1670843 provision.go:143] copyHostCerts
	I1222 01:28:39.867858 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:28:39.867874 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:28:39.867957 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:28:39.868053 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:28:39.868064 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:28:39.868091 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:28:39.868150 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:28:39.868160 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:28:39.868186 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:28:39.868234 1670843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:28:40.422763 1670843 provision.go:177] copyRemoteCerts
	I1222 01:28:40.422844 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:28:40.422895 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.440513 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.538074 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:28:40.556414 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:28:40.575373 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:28:40.593558 1670843 provision.go:87] duration metric: took 742.986656ms to configureAuth
	I1222 01:28:40.593586 1670843 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:28:40.593787 1670843 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:40.593803 1670843 machine.go:97] duration metric: took 4.22733559s to provisionDockerMachine
	I1222 01:28:40.593812 1670843 client.go:176] duration metric: took 10.837081515s to LocalClient.Create
	I1222 01:28:40.593832 1670843 start.go:167] duration metric: took 10.837143899s to libmachine.API.Create "newest-cni-869293"
	I1222 01:28:40.593852 1670843 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:28:40.593867 1670843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:28:40.593917 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:28:40.593967 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.610836 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.706406 1670843 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:28:40.710036 1670843 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:28:40.710063 1670843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:28:40.710074 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:28:40.710147 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:28:40.710227 1670843 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:28:40.710335 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:28:40.718359 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:40.738021 1670843 start.go:296] duration metric: took 144.14894ms for postStartSetup
	I1222 01:28:40.738502 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.756061 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:40.756360 1670843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:28:40.756410 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.773896 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.867396 1670843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:28:40.872357 1670843 start.go:128] duration metric: took 11.119339666s to createHost
	I1222 01:28:40.872384 1670843 start.go:83] releasing machines lock for "newest-cni-869293", held for 11.11946774s
	I1222 01:28:40.872459 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.890060 1670843 ssh_runner.go:195] Run: cat /version.json
	I1222 01:28:40.890150 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.890413 1670843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:28:40.890480 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.915709 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.935746 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:41.107089 1670843 ssh_runner.go:195] Run: systemctl --version
	I1222 01:28:41.113703 1670843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:28:41.118148 1670843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:28:41.118236 1670843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:28:41.147883 1670843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:28:41.147909 1670843 start.go:496] detecting cgroup driver to use...
	I1222 01:28:41.147968 1670843 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:28:41.148044 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:28:41.163654 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:28:41.177388 1670843 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:28:41.177464 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:28:41.195478 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:28:41.214648 1670843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:28:41.335255 1670843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:28:41.459740 1670843 docker.go:234] disabling docker service ...
	I1222 01:28:41.459818 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:28:41.481154 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:28:41.495828 1670843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:28:41.616188 1670843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:28:41.741458 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:28:41.755996 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:28:41.769875 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:28:41.778912 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:28:41.787547 1670843 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:28:41.787619 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:28:41.796129 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.804747 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:28:41.813988 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.823382 1670843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:28:41.831512 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:28:41.840674 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:28:41.849663 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:28:41.859033 1670843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:28:41.866669 1670843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:28:41.874404 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:41.996020 1670843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:28:42.147980 1670843 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:28:42.148078 1670843 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:28:42.153206 1670843 start.go:564] Will wait 60s for crictl version
	I1222 01:28:42.153310 1670843 ssh_runner.go:195] Run: which crictl
	I1222 01:28:42.158111 1670843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:28:42.188961 1670843 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:28:42.189058 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.212951 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.242832 1670843 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:28:42.245962 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:42.263401 1670843 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:28:42.267705 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.281779 1670843 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:28:42.284630 1670843 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:28:42.284796 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:42.284882 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.315646 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.315686 1670843 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:28:42.315760 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.343479 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.343504 1670843 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:28:42.343513 1670843 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:28:42.343653 1670843 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:28:42.343729 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:28:42.368317 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:42.368388 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:42.368427 1670843 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:28:42.368461 1670843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:28:42.368600 1670843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:28:42.368678 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:28:42.376805 1670843 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:28:42.376907 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:28:42.384913 1670843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:28:42.398203 1670843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:28:42.412066 1670843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:28:42.425146 1670843 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:28:42.428711 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.438586 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:42.581997 1670843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:28:42.598481 1670843 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:28:42.598505 1670843 certs.go:195] generating shared ca certs ...
	I1222 01:28:42.598523 1670843 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:42.598712 1670843 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:28:42.598780 1670843 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:28:42.598795 1670843 certs.go:257] generating profile certs ...
	I1222 01:28:42.598868 1670843 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:28:42.598888 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt with IP's: []
	I1222 01:28:43.368024 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt ...
	I1222 01:28:43.368059 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt: {Name:mkfc3a338fdb42add5491ce4694522898b79b83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368262 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key ...
	I1222 01:28:43.368276 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key: {Name:mkea74dd50bc644b440bafb99fc54190912b7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368378 1670843 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:28:43.368397 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:28:43.608821 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce ...
	I1222 01:28:43.608852 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce: {Name:mk0db6b3e8c9bf7aff940b44fd05b130d9d585d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609048 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce ...
	I1222 01:28:43.609064 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce: {Name:mk9a3f763ae7146332940a9b4d9169402652e2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609162 1670843 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt
	I1222 01:28:43.609244 1670843 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key
	I1222 01:28:43.609298 1670843 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:28:43.609318 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt with IP's: []
	I1222 01:28:43.826780 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt ...
	I1222 01:28:43.826812 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt: {Name:mk2b8cedfe513097eb57f8b68379ebde37c90b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827666 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key ...
	I1222 01:28:43.827685 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key: {Name:mk5be442e41d6696d708120ad1b125b0231d124b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827919 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:28:43.827970 1670843 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:28:43.827983 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:28:43.828011 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:28:43.828040 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:28:43.828064 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:28:43.828110 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:43.828781 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:28:43.849273 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:28:43.869748 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:28:43.888552 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:28:43.907953 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:28:43.926293 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:28:43.944116 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:28:43.961772 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:28:43.979659 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:28:44.001330 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:28:44.024142 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:28:44.045144 1670843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:28:44.059777 1670843 ssh_runner.go:195] Run: openssl version
	I1222 01:28:44.066272 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.074265 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:28:44.081820 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085681 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085753 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.127374 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:28:44.135158 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:28:44.143793 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.151667 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:28:44.159424 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163331 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163415 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.204528 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:28:44.212004 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:28:44.219747 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.227544 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:28:44.235151 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239441 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239512 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.281186 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.288873 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.296839 1670843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:28:44.300500 1670843 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:28:44.300563 1670843 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:44.300658 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:28:44.300723 1670843 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:28:44.328761 1670843 cri.go:96] found id: ""
	I1222 01:28:44.328834 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:28:44.336639 1670843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:28:44.344992 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:28:44.345060 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:28:44.353083 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:28:44.353102 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:28:44.353165 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:28:44.361047 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:28:44.361129 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:28:44.368921 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:28:44.377002 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:28:44.377072 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:28:44.384992 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.393081 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:28:44.393153 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.400779 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:28:44.408653 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:28:44.408723 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:28:44.416474 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:28:44.452812 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:28:44.452877 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:28:44.531198 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:28:44.531307 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:28:44.531355 1670843 kubeadm.go:319] OS: Linux
	I1222 01:28:44.531413 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:28:44.531466 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:28:44.531524 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:28:44.531590 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:28:44.531650 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:28:44.531733 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:28:44.531797 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:28:44.531864 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:28:44.531927 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:28:44.604053 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:28:44.604174 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:28:44.604270 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:28:44.614524 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:28:44.617569 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:28:44.617679 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:28:44.617781 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:28:44.891888 1670843 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:28:45.152805 1670843 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:28:45.274684 1670843 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:28:45.517117 1670843 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:28:45.648291 1670843 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:28:45.648639 1670843 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:45.933782 1670843 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:28:45.933931 1670843 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:46.072331 1670843 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:28:46.408818 1670843 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:28:46.502126 1670843 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:28:46.502542 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:28:46.995871 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:28:47.191545 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:28:47.264763 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:28:47.533721 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:28:47.788353 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:28:47.789301 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:28:47.796939 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:28:47.800747 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:28:47.800866 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:28:47.807894 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:28:47.807975 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:28:47.826994 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:28:47.827460 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:28:47.835465 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:28:47.835861 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:28:47.836121 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:28:47.973178 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:28:47.973389 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:30:31.528083 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001198283s
	I1222 01:30:31.528113 1661698 kubeadm.go:319] 
	I1222 01:30:31.528168 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:30:31.528204 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:30:31.528304 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:30:31.528309 1661698 kubeadm.go:319] 
	I1222 01:30:31.528414 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:30:31.528445 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:30:31.528475 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:30:31.528479 1661698 kubeadm.go:319] 
	I1222 01:30:31.534468 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:30:31.534939 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:30:31.535063 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:30:31.535323 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:30:31.535332 1661698 kubeadm.go:319] 
	I1222 01:30:31.535406 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:30:31.535527 1661698 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:30:31.535610 1661698 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:30:31.944583 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:30:31.959295 1661698 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:30:31.959366 1661698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:30:31.967786 1661698 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:30:31.967809 1661698 kubeadm.go:158] found existing configuration files:
	
	I1222 01:30:31.967875 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:30:31.976572 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:30:31.976645 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:30:31.984566 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:30:31.995193 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:30:31.995287 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:30:32.006676 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.018592 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:30:32.018733 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.028237 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:30:32.043043 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:30:32.043172 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:30:32.052235 1661698 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:30:32.094134 1661698 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:30:32.094474 1661698 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:30:32.174573 1661698 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:30:32.174734 1661698 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:30:32.174816 1661698 kubeadm.go:319] OS: Linux
	I1222 01:30:32.174901 1661698 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:30:32.174991 1661698 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:30:32.175073 1661698 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:30:32.175183 1661698 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:30:32.175273 1661698 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:30:32.175356 1661698 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:30:32.175408 1661698 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:30:32.175461 1661698 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:30:32.175510 1661698 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:30:32.244754 1661698 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:30:32.244977 1661698 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:30:32.245121 1661698 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:30:32.250598 1661698 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:30:32.253515 1661698 out.go:252]   - Generating certificates and keys ...
	I1222 01:30:32.253625 1661698 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:30:32.253721 1661698 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:30:32.253819 1661698 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:30:32.253899 1661698 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:30:32.253989 1661698 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:30:32.254107 1661698 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:30:32.254400 1661698 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:30:32.254506 1661698 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:30:32.254956 1661698 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:30:32.255237 1661698 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:30:32.255491 1661698 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:30:32.255552 1661698 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:30:32.402631 1661698 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:30:32.599258 1661698 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:30:33.036089 1661698 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:30:33.328680 1661698 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:30:33.401037 1661698 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:30:33.401569 1661698 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:30:33.404184 1661698 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:30:33.407507 1661698 out.go:252]   - Booting up control plane ...
	I1222 01:30:33.407615 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:30:33.407700 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:30:33.407772 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:30:33.430782 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:30:33.431319 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:30:33.439215 1661698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:30:33.439558 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:30:33.439606 1661698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:30:33.604011 1661698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:30:33.604133 1661698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:32:47.973116 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000283341s
	I1222 01:32:47.973157 1670843 kubeadm.go:319] 
	I1222 01:32:47.973324 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:32:47.973386 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:32:47.973953 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:32:47.974135 1670843 kubeadm.go:319] 
	I1222 01:32:47.974333 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:32:47.974391 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:32:47.974447 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:32:47.974452 1670843 kubeadm.go:319] 
	I1222 01:32:47.979596 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:32:47.980069 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:32:47.980186 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:32:47.980594 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:32:47.980620 1670843 kubeadm.go:319] 
	I1222 01:32:47.980717 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:32:47.980850 1670843 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000283341s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:32:47.980937 1670843 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:32:48.391513 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:32:48.405347 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:32:48.405424 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:32:48.413621 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:32:48.413642 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:32:48.413694 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:32:48.421650 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:32:48.421714 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:32:48.429403 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:32:48.437071 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:32:48.437146 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:32:48.444785 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.452627 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:32:48.452694 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.460251 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:32:48.468521 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:32:48.468599 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:32:48.476496 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:32:48.517508 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:32:48.517575 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:32:48.594935 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:32:48.595008 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:32:48.595046 1670843 kubeadm.go:319] OS: Linux
	I1222 01:32:48.595095 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:32:48.595143 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:32:48.595192 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:32:48.595240 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:32:48.595288 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:32:48.595340 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:32:48.595387 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:32:48.595435 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:32:48.595482 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:32:48.660826 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:32:48.660943 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:32:48.661079 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:32:48.666682 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:32:48.672026 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:32:48.672133 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:32:48.672215 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:32:48.672316 1670843 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:32:48.672398 1670843 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:32:48.672480 1670843 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:32:48.672546 1670843 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:32:48.672621 1670843 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:32:48.672694 1670843 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:32:48.672781 1670843 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:32:48.672898 1670843 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:32:48.672968 1670843 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:32:48.673051 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:32:48.931141 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:32:49.321960 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:32:49.787743 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:32:49.993441 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:32:50.084543 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:32:50.085011 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:32:50.087783 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:32:50.091167 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:32:50.091277 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:32:50.091357 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:32:50.091431 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:32:50.113375 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:32:50.113487 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:32:50.122587 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:32:50.123919 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:32:50.124082 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:32:50.256676 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:32:50.256808 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:34:33.605118 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202243s
	I1222 01:34:33.610423 1661698 kubeadm.go:319] 
	I1222 01:34:33.610510 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:34:33.610555 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:34:33.610673 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:34:33.610685 1661698 kubeadm.go:319] 
	I1222 01:34:33.610798 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:34:33.610837 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:34:33.610875 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:34:33.610884 1661698 kubeadm.go:319] 
	I1222 01:34:33.611729 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:34:33.612160 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:34:33.612286 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:34:33.612615 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:34:33.612639 1661698 kubeadm.go:319] 
	I1222 01:34:33.612738 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:34:33.612826 1661698 kubeadm.go:403] duration metric: took 8m6.521308561s to StartCluster
	I1222 01:34:33.612869 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:34:33.612963 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:34:33.638994 1661698 cri.go:96] found id: ""
	I1222 01:34:33.639065 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.639100 1661698 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:34:33.639124 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:34:33.639214 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:34:33.664403 1661698 cri.go:96] found id: ""
	I1222 01:34:33.664427 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.664436 1661698 logs.go:284] No container was found matching "etcd"
	I1222 01:34:33.664446 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:34:33.664509 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:34:33.695712 1661698 cri.go:96] found id: ""
	I1222 01:34:33.695738 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.695748 1661698 logs.go:284] No container was found matching "coredns"
	I1222 01:34:33.695754 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:34:33.695824 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:34:33.725832 1661698 cri.go:96] found id: ""
	I1222 01:34:33.725860 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.725869 1661698 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:34:33.725877 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:34:33.725946 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:34:33.754495 1661698 cri.go:96] found id: ""
	I1222 01:34:33.754525 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.754545 1661698 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:34:33.754568 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:34:33.754673 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:34:33.781930 1661698 cri.go:96] found id: ""
	I1222 01:34:33.781958 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.781967 1661698 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:34:33.781974 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:34:33.782035 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:34:33.809335 1661698 cri.go:96] found id: ""
	I1222 01:34:33.809412 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.809435 1661698 logs.go:284] No container was found matching "kindnet"
	I1222 01:34:33.809461 1661698 logs.go:123] Gathering logs for kubelet ...
	I1222 01:34:33.809500 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:34:33.870551 1661698 logs.go:123] Gathering logs for dmesg ...
	I1222 01:34:33.870590 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:34:33.887403 1661698 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:34:33.887432 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:34:33.956483 1661698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:34:33.956566 1661698 logs.go:123] Gathering logs for containerd ...
	I1222 01:34:33.956596 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:34:34.001866 1661698 logs.go:123] Gathering logs for container status ...
	I1222 01:34:34.001912 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:34:34.042014 1661698 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:34:34.042151 1661698 out.go:285] * 
	W1222 01:34:34.042248 1661698 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.042299 1661698 out.go:285] * 
	W1222 01:34:34.044850 1661698 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:34:34.050486 1661698 out.go:203] 
	W1222 01:34:34.053486 1661698 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.053539 1661698 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:34:34.053562 1661698 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:34:34.056784 1661698 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:26:14 no-preload-154186 containerd[756]: time="2025-12-22T01:26:14.470410254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.787207960Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.789433827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.803823042Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.804652777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.331340037Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.334400636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.353661044Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.357804008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.359874060Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.362802572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.384615077Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.388118298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.047922253Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.050246394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.057661077Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.058387369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.695409612Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.698274402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.719228692Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.720159212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.239290626Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.241741545Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.251782100Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.252093930Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:35.158930    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:35.159768    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:35.160835    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:35.162559    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:35.163175    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:34:35 up 1 day,  8:17,  0 user,  load average: 0.62, 1.24, 1.94
	Linux no-preload-154186 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:34:32 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:32 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 01:34:32 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:32 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:32 no-preload-154186 kubelet[5359]: E1222 01:34:32.811059    5359 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:32 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:32 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:33 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 01:34:33 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:33 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:33 no-preload-154186 kubelet[5365]: E1222 01:34:33.553784    5365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:33 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:33 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:34 no-preload-154186 kubelet[5449]: E1222 01:34:34.366562    5449 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:35 no-preload-154186 kubelet[5536]: E1222 01:34:35.080688    5536 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 6 (331.924556ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 01:34:35.628522 1678409 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (513.48s)
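The kubelet restart loop captured above keeps exiting with "kubelet is configured to not run on a host using cgroup v1", which in turn explains the `connection refused` errors against the apiserver on `localhost:8443`. As a diagnostic aside (not part of the recorded test run), the cgroup mode of a Linux host can be checked with a single `stat` call against the cgroup mount point:

```shell
# Report the filesystem type mounted at the cgroup root.
#   cgroup2fs -> unified cgroup v2 hierarchy (accepted by this kubelet)
#   tmpfs     -> legacy cgroup v1 hierarchy (rejected, as in the log above)
stat -fc %T /sys/fs/cgroup/
```

On a Ubuntu 20.04 host like the `ip-172-31-24-2` runner in this report, the default is a v1 hierarchy (`tmpfs`), which is consistent with the validation failure repeated in the kubelet log.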

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (502.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1222 01:29:13.743149 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:13.748492 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:13.758786 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:13.779080 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:13.819353 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:13.899785 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:14.060275 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:14.380961 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:15.022133 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:16.303319 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:18.863619 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:23.984552 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:26.767434 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:26.772698 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:26.783029 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:26.803400 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:26.843767 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:26.924200 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:27.084674 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:27.405281 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:28.046340 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:29.326683 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:31.886981 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:34.224846 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:37.007780 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:47.247982 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:29:54.705286 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:30:07.728512 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:30:35.665571 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:30:48.688819 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:30:56.758267 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:31:29.153642 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:31:57.586253 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:32:10.609173 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:33:07.826609 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:34:13.744497 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:34:26.767484 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:34:32.214145 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m21.211220732s)

                                                
                                                
-- stdout --
	* [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1222 01:28:29.517235 1670843 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:28:29.517360 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517375 1670843 out.go:374] Setting ErrFile to fd 2...
	I1222 01:28:29.517381 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517635 1670843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:28:29.518139 1670843 out.go:368] Setting JSON to false
	I1222 01:28:29.519021 1670843 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115862,"bootTime":1766251047,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:28:29.519085 1670843 start.go:143] virtualization:  
	I1222 01:28:29.523165 1670843 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:28:29.526534 1670843 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:28:29.526612 1670843 notify.go:221] Checking for updates...
	I1222 01:28:29.533896 1670843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:28:29.537080 1670843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:28:29.540168 1670843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:28:29.543253 1670843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:28:29.546250 1670843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:28:29.549849 1670843 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:29.549971 1670843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:28:29.575293 1670843 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:28:29.575440 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.641901 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.632088848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.642009 1670843 docker.go:319] overlay module found
	I1222 01:28:29.645181 1670843 out.go:179] * Using the docker driver based on user configuration
	I1222 01:28:29.648076 1670843 start.go:309] selected driver: docker
	I1222 01:28:29.648097 1670843 start.go:928] validating driver "docker" against <nil>
	I1222 01:28:29.648110 1670843 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:28:29.648868 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.705361 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.695866952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.705519 1670843 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:28:29.705566 1670843 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:28:29.705783 1670843 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:28:29.708430 1670843 out.go:179] * Using Docker driver with root privileges
	I1222 01:28:29.711256 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:29.711321 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:29.711336 1670843 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:28:29.711424 1670843 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:29.714525 1670843 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:28:29.717251 1670843 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:28:29.720207 1670843 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:28:29.723048 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:29.723095 1670843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:28:29.723110 1670843 cache.go:65] Caching tarball of preloaded images
	I1222 01:28:29.723137 1670843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:28:29.723197 1670843 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:28:29.723207 1670843 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:28:29.723316 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:29.723334 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json: {Name:mk7d2be4f8d5fd1ff0598339a0c1f4c8dc1289c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:29.752728 1670843 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:28:29.752750 1670843 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:28:29.752765 1670843 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:28:29.752806 1670843 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:28:29.752907 1670843 start.go:364] duration metric: took 85.72µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:28:29.752938 1670843 start.go:93] Provisioning new machine with config: &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:28:29.753004 1670843 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:28:29.756462 1670843 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:28:29.756690 1670843 start.go:159] libmachine.API.Create for "newest-cni-869293" (driver="docker")
	I1222 01:28:29.756724 1670843 client.go:173] LocalClient.Create starting
	I1222 01:28:29.756801 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:28:29.756843 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756861 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.756915 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:28:29.756931 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756942 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.757315 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:28:29.774941 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:28:29.775018 1670843 network_create.go:284] running [docker network inspect newest-cni-869293] to gather additional debugging logs...
	I1222 01:28:29.775034 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293
	W1222 01:28:29.790350 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 returned with exit code 1
	I1222 01:28:29.790377 1670843 network_create.go:287] error running [docker network inspect newest-cni-869293]: docker network inspect newest-cni-869293: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-869293 not found
	I1222 01:28:29.790389 1670843 network_create.go:289] output of [docker network inspect newest-cni-869293]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-869293 not found
	
	** /stderr **
	I1222 01:28:29.790489 1670843 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:29.811617 1670843 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:28:29.811973 1670843 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:28:29.812349 1670843 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:28:29.812832 1670843 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3f50}
	I1222 01:28:29.812860 1670843 network_create.go:124] attempt to create docker network newest-cni-869293 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:28:29.812925 1670843 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-869293 newest-cni-869293
	I1222 01:28:29.869740 1670843 network_create.go:108] docker network newest-cni-869293 192.168.76.0/24 created
	I1222 01:28:29.869775 1670843 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-869293" container
	I1222 01:28:29.869851 1670843 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:28:29.886155 1670843 cli_runner.go:164] Run: docker volume create newest-cni-869293 --label name.minikube.sigs.k8s.io=newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:28:29.904602 1670843 oci.go:103] Successfully created a docker volume newest-cni-869293
	I1222 01:28:29.904703 1670843 cli_runner.go:164] Run: docker run --rm --name newest-cni-869293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --entrypoint /usr/bin/test -v newest-cni-869293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:28:30.526342 1670843 oci.go:107] Successfully prepared a docker volume newest-cni-869293
	I1222 01:28:30.526403 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:30.526413 1670843 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:28:30.526485 1670843 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:28:35.482640 1670843 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956113773s)
	I1222 01:28:35.482672 1670843 kic.go:203] duration metric: took 4.956255379s to extract preloaded images to volume ...
	W1222 01:28:35.482805 1670843 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:28:35.482926 1670843 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:28:35.547405 1670843 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-869293 --name newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-869293 --network newest-cni-869293 --ip 192.168.76.2 --volume newest-cni-869293:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:28:35.858405 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Running}}
	I1222 01:28:35.884175 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:35.909405 1670843 cli_runner.go:164] Run: docker exec newest-cni-869293 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:28:35.959724 1670843 oci.go:144] the created container "newest-cni-869293" has a running status.
	I1222 01:28:35.959757 1670843 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa...
	I1222 01:28:36.206517 1670843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:28:36.231277 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.257362 1670843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:28:36.257407 1670843 kic_runner.go:114] Args: [docker exec --privileged newest-cni-869293 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:28:36.342107 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.366444 1670843 machine.go:94] provisionDockerMachine start ...
	I1222 01:28:36.366556 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:36.385946 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:36.386403 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:36.386423 1670843 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:28:36.387088 1670843 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39762->127.0.0.1:38695: read: connection reset by peer
	I1222 01:28:39.522339 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.522366 1670843 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:28:39.522451 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.547399 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.547774 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.547786 1670843 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:28:39.694424 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.694503 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.714200 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.714526 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.714551 1670843 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:28:39.850447 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:28:39.850500 1670843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:28:39.850540 1670843 ubuntu.go:190] setting up certificates
	I1222 01:28:39.850552 1670843 provision.go:84] configureAuth start
	I1222 01:28:39.850620 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:39.867785 1670843 provision.go:143] copyHostCerts
	I1222 01:28:39.867858 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:28:39.867874 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:28:39.867957 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:28:39.868053 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:28:39.868064 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:28:39.868091 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:28:39.868150 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:28:39.868160 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:28:39.868186 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:28:39.868234 1670843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:28:40.422763 1670843 provision.go:177] copyRemoteCerts
	I1222 01:28:40.422844 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:28:40.422895 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.440513 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.538074 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:28:40.556414 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:28:40.575373 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:28:40.593558 1670843 provision.go:87] duration metric: took 742.986656ms to configureAuth
	I1222 01:28:40.593586 1670843 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:28:40.593787 1670843 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:40.593803 1670843 machine.go:97] duration metric: took 4.22733559s to provisionDockerMachine
	I1222 01:28:40.593812 1670843 client.go:176] duration metric: took 10.837081515s to LocalClient.Create
	I1222 01:28:40.593832 1670843 start.go:167] duration metric: took 10.837143899s to libmachine.API.Create "newest-cni-869293"
	I1222 01:28:40.593852 1670843 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:28:40.593867 1670843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:28:40.593917 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:28:40.593967 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.610836 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.706406 1670843 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:28:40.710036 1670843 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:28:40.710063 1670843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:28:40.710074 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:28:40.710147 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:28:40.710227 1670843 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:28:40.710335 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:28:40.718359 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:40.738021 1670843 start.go:296] duration metric: took 144.14894ms for postStartSetup
	I1222 01:28:40.738502 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.756061 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:40.756360 1670843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:28:40.756410 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.773896 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.867396 1670843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:28:40.872357 1670843 start.go:128] duration metric: took 11.119339666s to createHost
	I1222 01:28:40.872384 1670843 start.go:83] releasing machines lock for "newest-cni-869293", held for 11.11946774s
	I1222 01:28:40.872459 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.890060 1670843 ssh_runner.go:195] Run: cat /version.json
	I1222 01:28:40.890150 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.890413 1670843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:28:40.890480 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.915709 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.935746 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:41.107089 1670843 ssh_runner.go:195] Run: systemctl --version
	I1222 01:28:41.113703 1670843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:28:41.118148 1670843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:28:41.118236 1670843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:28:41.147883 1670843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:28:41.147909 1670843 start.go:496] detecting cgroup driver to use...
	I1222 01:28:41.147968 1670843 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:28:41.148044 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:28:41.163654 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:28:41.177388 1670843 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:28:41.177464 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:28:41.195478 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:28:41.214648 1670843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:28:41.335255 1670843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:28:41.459740 1670843 docker.go:234] disabling docker service ...
	I1222 01:28:41.459818 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:28:41.481154 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:28:41.495828 1670843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:28:41.616188 1670843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:28:41.741458 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:28:41.755996 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:28:41.769875 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:28:41.778912 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:28:41.787547 1670843 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:28:41.787619 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:28:41.796129 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.804747 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:28:41.813988 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.823382 1670843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:28:41.831512 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:28:41.840674 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:28:41.849663 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:28:41.859033 1670843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:28:41.866669 1670843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:28:41.874404 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:41.996020 1670843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:28:42.147980 1670843 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:28:42.148078 1670843 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:28:42.153206 1670843 start.go:564] Will wait 60s for crictl version
	I1222 01:28:42.153310 1670843 ssh_runner.go:195] Run: which crictl
	I1222 01:28:42.158111 1670843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:28:42.188961 1670843 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:28:42.189058 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.212951 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.242832 1670843 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:28:42.245962 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:42.263401 1670843 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:28:42.267705 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.281779 1670843 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:28:42.284630 1670843 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:28:42.284796 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:42.284882 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.315646 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.315686 1670843 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:28:42.315760 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.343479 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.343504 1670843 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:28:42.343513 1670843 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:28:42.343653 1670843 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
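The kubelet unit fragment above contains two ExecStart lines because systemd list-valued settings append rather than replace: a drop-in override must first clear the list with an empty `ExecStart=` before setting the new command line. A sketch that writes such a drop-in to a throwaway path (the real path, per the scp lines further down, is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the kubelet flags here are abbreviated for illustration):

```shell
# Write a systemd drop-in that resets and redefines ExecStart.
# An override with only the second line would append a second command
# and make the unit invalid; the empty ExecStart= clears the list first.
cat > /tmp/demo-10-kubeadm.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --config=/var/lib/kubelet/config.yaml
EOF
grep -c '^ExecStart' /tmp/demo-10-kubeadm.conf
```

After installing a drop-in like this, a `systemctl daemon-reload` (which the log runs later) is required before the override takes effect.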
	I1222 01:28:42.343729 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:28:42.368317 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:42.368388 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:42.368427 1670843 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:28:42.368461 1670843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:28:42.368600 1670843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
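The generated config above is a multi-document YAML: the `kubeadm.pod-network-cidr=10.42.0.0/16` extra option must surface consistently as `podSubnet` in ClusterConfiguration and `clusterCIDR` in KubeProxyConfiguration. A quick way to pull such a field back out for a sanity check, using a throwaway copy of the relevant fragments (file path and awk one-liner are illustrative, not from minikube):

```shell
# Recreate the relevant fragments of the multi-document config,
# then extract podSubnet and strip its quoting.
CFG=/tmp/demo_kubeadm.yaml
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
EOF
awk -F': ' '/podSubnet/ {gsub(/"/,"",$2); print $2}' "$CFG"
```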
	
	I1222 01:28:42.368678 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:28:42.376805 1670843 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:28:42.376907 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:28:42.384913 1670843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:28:42.398203 1670843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:28:42.412066 1670843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:28:42.425146 1670843 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:28:42.428711 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.438586 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:42.581997 1670843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:28:42.598481 1670843 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:28:42.598505 1670843 certs.go:195] generating shared ca certs ...
	I1222 01:28:42.598523 1670843 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:42.598712 1670843 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:28:42.598780 1670843 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:28:42.598795 1670843 certs.go:257] generating profile certs ...
	I1222 01:28:42.598868 1670843 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:28:42.598888 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt with IP's: []
	I1222 01:28:43.368024 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt ...
	I1222 01:28:43.368059 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt: {Name:mkfc3a338fdb42add5491ce4694522898b79b83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368262 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key ...
	I1222 01:28:43.368276 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key: {Name:mkea74dd50bc644b440bafb99fc54190912b7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368378 1670843 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:28:43.368397 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:28:43.608821 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce ...
	I1222 01:28:43.608852 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce: {Name:mk0db6b3e8c9bf7aff940b44fd05b130d9d585d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609048 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce ...
	I1222 01:28:43.609064 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce: {Name:mk9a3f763ae7146332940a9b4d9169402652e2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609162 1670843 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt
	I1222 01:28:43.609244 1670843 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key
	I1222 01:28:43.609298 1670843 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:28:43.609318 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt with IP's: []
	I1222 01:28:43.826780 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt ...
	I1222 01:28:43.826812 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt: {Name:mk2b8cedfe513097eb57f8b68379ebde37c90b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827666 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key ...
	I1222 01:28:43.827685 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key: {Name:mk5be442e41d6696d708120ad1b125b0231d124b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827919 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:28:43.827970 1670843 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:28:43.827983 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:28:43.828011 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:28:43.828040 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:28:43.828064 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:28:43.828110 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:43.828781 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:28:43.849273 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:28:43.869748 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:28:43.888552 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:28:43.907953 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:28:43.926293 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:28:43.944116 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:28:43.961772 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:28:43.979659 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:28:44.001330 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:28:44.024142 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:28:44.045144 1670843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:28:44.059777 1670843 ssh_runner.go:195] Run: openssl version
	I1222 01:28:44.066272 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.074265 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:28:44.081820 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085681 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085753 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.127374 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:28:44.135158 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:28:44.143793 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.151667 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:28:44.159424 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163331 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163415 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.204528 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:28:44.212004 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:28:44.219747 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.227544 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:28:44.235151 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239441 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239512 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.281186 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.288873 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
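The repeating sequence above (openssl x509 -hash, then ln -fs to a name like b5213941.0) is OpenSSL's subject-hash convention: the library locates a CA in /etc/ssl/certs by hashing the certificate subject and looking up `<hash>.0`. A sketch of the same steps with a throwaway self-signed cert (directory and subject name are illustrative; assumes the openssl CLI is available):

```shell
# Generate a disposable CA cert, compute its subject hash, and create
# the <hash>.0 symlink that OpenSSL's lookup expects.
DIR=/tmp/demo_certs
mkdir -p "$DIR"
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
```

The `.0` suffix disambiguates distinct certificates whose subjects happen to hash to the same value (`.1`, `.2`, and so on).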
	I1222 01:28:44.296839 1670843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:28:44.300500 1670843 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:28:44.300563 1670843 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:44.300658 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:28:44.300723 1670843 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:28:44.328761 1670843 cri.go:96] found id: ""
	I1222 01:28:44.328834 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:28:44.336639 1670843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:28:44.344992 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:28:44.345060 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:28:44.353083 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:28:44.353102 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:28:44.353165 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:28:44.361047 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:28:44.361129 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:28:44.368921 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:28:44.377002 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:28:44.377072 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:28:44.384992 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.393081 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:28:44.393153 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.400779 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:28:44.408653 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:28:44.408723 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
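The four grep/rm pairs above implement one stale-config rule: each kubeconfig under /etc/kubernetes is kept only if it points at the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. A minimal reproduction of the loop with throwaway files (directory and file contents are illustrative):

```shell
# Keep only the kubeconfigs that reference the expected endpoint;
# delete the rest, as minikube's stale-config cleanup does.
ENDPOINT='https://control-plane.minikube.internal:8443'
D=/tmp/demo_kubeconf
rm -rf "$D" && mkdir -p "$D"
printf 'server: %s\n' "$ENDPOINT" > "$D/admin.conf"
printf 'server: https://other:6443\n' > "$D/kubelet.conf"
for f in "$D"/*.conf; do
  grep -q "$ENDPOINT" "$f" || rm -f "$f"
done
```

In the log the greps all exit with status 2 because the files do not exist yet (first start), so the subsequent `rm -f` calls are harmless no-ops.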
	I1222 01:28:44.416474 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:28:44.452812 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:28:44.452877 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:28:44.531198 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:28:44.531307 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:28:44.531355 1670843 kubeadm.go:319] OS: Linux
	I1222 01:28:44.531413 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:28:44.531466 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:28:44.531524 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:28:44.531590 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:28:44.531650 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:28:44.531733 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:28:44.531797 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:28:44.531864 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:28:44.531927 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:28:44.604053 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:28:44.604174 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:28:44.604270 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:28:44.614524 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:28:44.617569 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:28:44.617679 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:28:44.617781 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:28:44.891888 1670843 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:28:45.152805 1670843 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:28:45.274684 1670843 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:28:45.517117 1670843 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:28:45.648291 1670843 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:28:45.648639 1670843 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:45.933782 1670843 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:28:45.933931 1670843 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:46.072331 1670843 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:28:46.408818 1670843 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:28:46.502126 1670843 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:28:46.502542 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:28:46.995871 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:28:47.191545 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:28:47.264763 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:28:47.533721 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:28:47.788353 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:28:47.789301 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:28:47.796939 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:28:47.800747 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:28:47.800866 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:28:47.807894 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:28:47.807975 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:28:47.826994 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:28:47.827460 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:28:47.835465 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:28:47.835861 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:28:47.836121 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:28:47.973178 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:28:47.973389 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:32:47.973116 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000283341s
	I1222 01:32:47.973157 1670843 kubeadm.go:319] 
	I1222 01:32:47.973324 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:32:47.973386 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:32:47.973953 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:32:47.974135 1670843 kubeadm.go:319] 
	I1222 01:32:47.974333 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:32:47.974391 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:32:47.974447 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:32:47.974452 1670843 kubeadm.go:319] 
	I1222 01:32:47.979596 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:32:47.980069 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:32:47.980186 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:32:47.980594 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:32:47.980620 1670843 kubeadm.go:319] 
	I1222 01:32:47.980717 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:32:47.980850 1670843 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000283341s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:32:47.980937 1670843 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:32:48.391513 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:32:48.405347 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:32:48.405424 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:32:48.413621 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:32:48.413642 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:32:48.413694 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:32:48.421650 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:32:48.421714 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:32:48.429403 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:32:48.437071 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:32:48.437146 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:32:48.444785 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.452627 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:32:48.452694 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.460251 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:32:48.468521 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:32:48.468599 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:32:48.476496 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:32:48.517508 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:32:48.517575 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:32:48.594935 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:32:48.595008 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:32:48.595046 1670843 kubeadm.go:319] OS: Linux
	I1222 01:32:48.595095 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:32:48.595143 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:32:48.595192 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:32:48.595240 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:32:48.595288 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:32:48.595340 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:32:48.595387 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:32:48.595435 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:32:48.595482 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:32:48.660826 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:32:48.660943 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:32:48.661079 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:32:48.666682 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:32:48.672026 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:32:48.672133 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:32:48.672215 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:32:48.672316 1670843 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:32:48.672398 1670843 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:32:48.672480 1670843 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:32:48.672546 1670843 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:32:48.672621 1670843 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:32:48.672694 1670843 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:32:48.672781 1670843 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:32:48.672898 1670843 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:32:48.672968 1670843 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:32:48.673051 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:32:48.931141 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:32:49.321960 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:32:49.787743 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:32:49.993441 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:32:50.084543 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:32:50.085011 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:32:50.087783 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:32:50.091167 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:32:50.091277 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:32:50.091357 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:32:50.091431 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:32:50.113375 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:32:50.113487 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:32:50.122587 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:32:50.123919 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:32:50.124082 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:32:50.256676 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:32:50.256808 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:36:50.255531 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000628133s
	I1222 01:36:50.255560 1670843 kubeadm.go:319] 
	I1222 01:36:50.255614 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:36:50.255656 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:36:50.255761 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:36:50.255770 1670843 kubeadm.go:319] 
	I1222 01:36:50.255874 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:36:50.255911 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:36:50.255945 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:36:50.255956 1670843 kubeadm.go:319] 
	I1222 01:36:50.263125 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:36:50.263638 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:36:50.263757 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:36:50.264047 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:36:50.264076 1670843 kubeadm.go:319] 
	I1222 01:36:50.264231 1670843 kubeadm.go:403] duration metric: took 8m5.963674476s to StartCluster
	I1222 01:36:50.264284 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:36:50.264365 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:36:50.264448 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:36:50.290175 1670843 cri.go:96] found id: ""
	I1222 01:36:50.290209 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.290218 1670843 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:36:50.290226 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:36:50.290294 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:36:50.322949 1670843 cri.go:96] found id: ""
	I1222 01:36:50.322982 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.322992 1670843 logs.go:284] No container was found matching "etcd"
	I1222 01:36:50.322998 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:36:50.323057 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:36:50.348796 1670843 cri.go:96] found id: ""
	I1222 01:36:50.348823 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.348833 1670843 logs.go:284] No container was found matching "coredns"
	I1222 01:36:50.348839 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:36:50.348963 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:36:50.377272 1670843 cri.go:96] found id: ""
	I1222 01:36:50.377300 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.377309 1670843 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:36:50.377316 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:36:50.377378 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:36:50.402189 1670843 cri.go:96] found id: ""
	I1222 01:36:50.402213 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.402222 1670843 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:36:50.402228 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:36:50.402290 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:36:50.426628 1670843 cri.go:96] found id: ""
	I1222 01:36:50.426656 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.426666 1670843 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:36:50.426674 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:36:50.426736 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:36:50.455040 1670843 cri.go:96] found id: ""
	I1222 01:36:50.455066 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.455076 1670843 logs.go:284] No container was found matching "kindnet"
	I1222 01:36:50.455086 1670843 logs.go:123] Gathering logs for kubelet ...
	I1222 01:36:50.455098 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:36:50.511757 1670843 logs.go:123] Gathering logs for dmesg ...
	I1222 01:36:50.511795 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:36:50.527148 1670843 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:36:50.527182 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:36:50.590231 1670843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:36:50.581376    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.582223    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.583755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.584210    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.585755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:36:50.581376    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.582223    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.583755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.584210    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.585755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:36:50.590258 1670843 logs.go:123] Gathering logs for containerd ...
	I1222 01:36:50.590270 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:36:50.629028 1670843 logs.go:123] Gathering logs for container status ...
	I1222 01:36:50.629065 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:36:50.657217 1670843 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:36:50.657266 1670843 out.go:285] * 
	W1222 01:36:50.657314 1670843 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:36:50.657331 1670843 out.go:285] * 
	W1222 01:36:50.659448 1670843 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:36:50.664218 1670843 out.go:203] 
	W1222 01:36:50.668072 1670843 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:36:50.668130 1670843 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:36:50.668162 1670843 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:36:50.671361 1670843 out.go:203] 

                                                
                                                
** /stderr **
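The failure above comes down to kubeadm's `wait-control-plane` phase polling the kubelet's healthz endpoint (`http://127.0.0.1:10248/healthz`) for 4 minutes without ever getting a healthy answer. That probe can be reproduced by hand; below is a minimal sketch (assuming the default kubelet healthz port 10248 — the port is configurable via the kubelet's `healthzPort` setting, so adjust if the cluster overrides it):

```python
import urllib.request
import urllib.error

# Default kubelet healthz endpoint, the same URL kubeadm polls in the log above.
HEALTHZ_URL = "http://127.0.0.1:10248/healthz"

def kubelet_healthy(url: str = HEALTHZ_URL, timeout: float = 2.0) -> bool:
    """Return True if the kubelet healthz endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timeout: the kubelet is not running or not
        # serving healthz -- the same failure mode kubeadm reports here.
        return False

if __name__ == "__main__":
    print("healthy" if kubelet_healthy() else "unhealthy")
```

When this probe reports unhealthy, the log's own suggestions apply: inspect `systemctl status kubelet` and `journalctl -xeu kubelet` on the node to find out why the kubelet exited or never started.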
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-869293
helpers_test.go:244: (dbg) docker inspect newest-cni-869293:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	        "Created": "2025-12-22T01:28:35.561963158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1671292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:28:35.62747581Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hostname",
	        "HostsPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hosts",
	        "LogPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e-json.log",
	        "Name": "/newest-cni-869293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-869293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-869293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	                "LowerDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-869293",
	                "Source": "/var/lib/docker/volumes/newest-cni-869293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-869293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-869293",
	                "name.minikube.sigs.k8s.io": "newest-cni-869293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edd3d6fd3544b1c59cd2b427c94606af7bf1f69297eb5ee2ee5ccea43b72aa42",
	            "SandboxKey": "/var/run/docker/netns/edd3d6fd3544",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38695"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38697"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38701"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38699"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38700"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-869293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:ea:31:73:c7:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "237b6ac5b33ea8f647685859c16cf161283b5f3d52eea65816f2e7dfeb4ec191",
	                    "EndpointID": "c502bf347220d543d3dcc62fde9abce756967f8038246c4b47be420a228be076",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-869293",
	                        "05e1fe12904b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293: exit status 6 (349.485519ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:36:51.103246 1683210 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p no-preload-154186 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p no-preload-154186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:36:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:36:23.234823 1681323 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:36:23.235027 1681323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:23.235049 1681323 out.go:374] Setting ErrFile to fd 2...
	I1222 01:36:23.235067 1681323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:23.235421 1681323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:36:23.235901 1681323 out.go:368] Setting JSON to false
	I1222 01:36:23.237129 1681323 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116336,"bootTime":1766251047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:36:23.237207 1681323 start.go:143] virtualization:  
	I1222 01:36:23.240218 1681323 out.go:179] * [no-preload-154186] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:36:23.244197 1681323 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:36:23.244262 1681323 notify.go:221] Checking for updates...
	I1222 01:36:23.247308 1681323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:36:23.251437 1681323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:23.254483 1681323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:36:23.257414 1681323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:36:23.260441 1681323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:36:23.264003 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:23.264837 1681323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:36:23.295171 1681323 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:36:23.295305 1681323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:23.350537 1681323 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:23.341149383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:23.350648 1681323 docker.go:319] overlay module found
	I1222 01:36:23.353847 1681323 out.go:179] * Using the docker driver based on existing profile
	I1222 01:36:23.356749 1681323 start.go:309] selected driver: docker
	I1222 01:36:23.356777 1681323 start.go:928] validating driver "docker" against &{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:23.356883 1681323 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:36:23.357613 1681323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:23.417124 1681323 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:23.408084515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:23.417456 1681323 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:36:23.417483 1681323 cni.go:84] Creating CNI manager for ""
	I1222 01:36:23.417540 1681323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:36:23.417589 1681323 start.go:353] cluster config:
	{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:23.420666 1681323 out.go:179] * Starting "no-preload-154186" primary control-plane node in "no-preload-154186" cluster
	I1222 01:36:23.423625 1681323 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:36:23.426661 1681323 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:36:23.429673 1681323 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:36:23.429835 1681323 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:36:23.430154 1681323 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:36:23.430238 1681323 cache.go:107] acquiring lock: {Name:mk3bde21e751b3aa3caf7a41c8a37e36cec6e7cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430340 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:36:23.430349 1681323 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.997µs
	I1222 01:36:23.430379 1681323 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:36:23.430401 1681323 cache.go:107] acquiring lock: {Name:mk4a15c8225bf94a78b514d4142ea41c6bb91faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430458 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:36:23.430472 1681323 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 72.633µs
	I1222 01:36:23.430491 1681323 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430523 1681323 cache.go:107] acquiring lock: {Name:mkeb24b7f997eb1a1a3d59e2a2d68597fffc7c36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430589 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:36:23.430602 1681323 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 94.27µs
	I1222 01:36:23.430610 1681323 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430636 1681323 cache.go:107] acquiring lock: {Name:mkf2939c17635a47347d3721871a718b69a7a19c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430687 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:36:23.430709 1681323 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 74.782µs
	I1222 01:36:23.430717 1681323 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430735 1681323 cache.go:107] acquiring lock: {Name:mk1daf2f1163a462fd1f82e12b9d4b157cffc772 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430785 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:36:23.430802 1681323 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 62.638µs
	I1222 01:36:23.430824 1681323 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430840 1681323 cache.go:107] acquiring lock: {Name:mk48171dacff6bbfb8016f0e5908022e81e1ea85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430924 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:36:23.430937 1681323 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 103.344µs
	I1222 01:36:23.430969 1681323 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:36:23.431003 1681323 cache.go:107] acquiring lock: {Name:mkc08548a3ab9782a3dcbbb4e211790535cb9d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.431057 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:36:23.431070 1681323 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 69.399µs
	I1222 01:36:23.431089 1681323 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:36:23.431107 1681323 cache.go:107] acquiring lock: {Name:mk2f653a9914a185aaa3299c67a548da6098dcf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.431143 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:36:23.431164 1681323 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.804µs
	I1222 01:36:23.431176 1681323 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:36:23.431183 1681323 cache.go:87] Successfully saved all images to host disk.
	I1222 01:36:23.450810 1681323 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:36:23.450833 1681323 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:36:23.450848 1681323 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:36:23.450878 1681323 start.go:360] acquireMachinesLock for no-preload-154186: {Name:mk9dee4f9b1c44d5e40729915965cd9e314df88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.450936 1681323 start.go:364] duration metric: took 37.506µs to acquireMachinesLock for "no-preload-154186"
	I1222 01:36:23.450961 1681323 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:36:23.450970 1681323 fix.go:54] fixHost starting: 
	I1222 01:36:23.451228 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:23.468570 1681323 fix.go:112] recreateIfNeeded on no-preload-154186: state=Stopped err=<nil>
	W1222 01:36:23.468607 1681323 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:36:23.472031 1681323 out.go:252] * Restarting existing docker container for "no-preload-154186" ...
	I1222 01:36:23.472128 1681323 cli_runner.go:164] Run: docker start no-preload-154186
	I1222 01:36:23.751686 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:23.775088 1681323 kic.go:430] container "no-preload-154186" state is running.
	I1222 01:36:23.775522 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:23.804788 1681323 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:36:23.805037 1681323 machine.go:94] provisionDockerMachine start ...
	I1222 01:36:23.805105 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:23.831796 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:23.832139 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:23.832149 1681323 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:36:23.834213 1681323 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:36:26.965689 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:36:26.965717 1681323 ubuntu.go:182] provisioning hostname "no-preload-154186"
	I1222 01:36:26.965785 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:26.985217 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:26.985542 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:26.985560 1681323 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-154186 && echo "no-preload-154186" | sudo tee /etc/hostname
	I1222 01:36:27.127502 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:36:27.127590 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.145587 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:27.145900 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:27.145916 1681323 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-154186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-154186/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-154186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:36:27.278718 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:36:27.278747 1681323 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:36:27.278768 1681323 ubuntu.go:190] setting up certificates
	I1222 01:36:27.278786 1681323 provision.go:84] configureAuth start
	I1222 01:36:27.278873 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:27.301231 1681323 provision.go:143] copyHostCerts
	I1222 01:36:27.301308 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:36:27.301328 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:36:27.301409 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:36:27.301556 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:36:27.301569 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:36:27.301598 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:36:27.301659 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:36:27.301669 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:36:27.301695 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:36:27.301746 1681323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.no-preload-154186 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-154186]
	I1222 01:36:27.754512 1681323 provision.go:177] copyRemoteCerts
	I1222 01:36:27.754594 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:36:27.754648 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.772550 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:27.874202 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:36:27.892571 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:36:27.911007 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:36:27.928834 1681323 provision.go:87] duration metric: took 650.003977ms to configureAuth
	I1222 01:36:27.928863 1681323 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:36:27.929086 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:27.929099 1681323 machine.go:97] duration metric: took 4.124054244s to provisionDockerMachine
	I1222 01:36:27.929107 1681323 start.go:293] postStartSetup for "no-preload-154186" (driver="docker")
	I1222 01:36:27.929119 1681323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:36:27.929165 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:36:27.929208 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.946963 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.042660 1681323 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:36:28.046171 1681323 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:36:28.046204 1681323 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:36:28.046222 1681323 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:36:28.046287 1681323 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:36:28.046377 1681323 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:36:28.046485 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:36:28.054291 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:36:28.073024 1681323 start.go:296] duration metric: took 143.901056ms for postStartSetup
	I1222 01:36:28.073108 1681323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:36:28.073167 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.091267 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.183597 1681323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:36:28.188658 1681323 fix.go:56] duration metric: took 4.737681885s for fixHost
	I1222 01:36:28.188687 1681323 start.go:83] releasing machines lock for "no-preload-154186", held for 4.737736532s
	I1222 01:36:28.188793 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:28.206039 1681323 ssh_runner.go:195] Run: cat /version.json
	I1222 01:36:28.206158 1681323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:36:28.206221 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.206378 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.224770 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.230258 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.414932 1681323 ssh_runner.go:195] Run: systemctl --version
	I1222 01:36:28.421366 1681323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:36:28.425653 1681323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:36:28.425721 1681323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:36:28.433525 1681323 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:36:28.433549 1681323 start.go:496] detecting cgroup driver to use...
	I1222 01:36:28.433582 1681323 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:36:28.433651 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:36:28.451333 1681323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:36:28.464888 1681323 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:36:28.464974 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:36:28.480732 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:36:28.494042 1681323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:36:28.611667 1681323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:36:28.731604 1681323 docker.go:234] disabling docker service ...
	I1222 01:36:28.731674 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:36:28.747773 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:36:28.761732 1681323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:36:28.883133 1681323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:36:29.013965 1681323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:36:29.029996 1681323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:36:29.046133 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:36:29.056270 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:36:29.066036 1681323 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:36:29.066163 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:36:29.075930 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:36:29.084710 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:36:29.093653 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:36:29.102647 1681323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:36:29.110826 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:36:29.119665 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:36:29.128698 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:36:29.137543 1681323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:36:29.145415 1681323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:36:29.153357 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:29.268778 1681323 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:36:29.366806 1681323 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:36:29.366878 1681323 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:36:29.370821 1681323 start.go:564] Will wait 60s for crictl version
	I1222 01:36:29.370889 1681323 ssh_runner.go:195] Run: which crictl
	I1222 01:36:29.374398 1681323 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:36:29.401622 1681323 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:36:29.401693 1681323 ssh_runner.go:195] Run: containerd --version
	I1222 01:36:29.425502 1681323 ssh_runner.go:195] Run: containerd --version
	I1222 01:36:29.452207 1681323 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:36:29.455184 1681323 cli_runner.go:164] Run: docker network inspect no-preload-154186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:36:29.471412 1681323 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:36:29.475195 1681323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:29.484943 1681323 kubeadm.go:884] updating cluster {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:36:29.485070 1681323 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:36:29.485129 1681323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:36:29.515771 1681323 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:36:29.515798 1681323 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:36:29.515812 1681323 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:36:29.515907 1681323 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-154186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:36:29.515977 1681323 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:36:29.544359 1681323 cni.go:84] Creating CNI manager for ""
	I1222 01:36:29.544384 1681323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:36:29.544401 1681323 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:36:29.544424 1681323 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-154186 NodeName:no-preload-154186 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:36:29.544539 1681323 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-154186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:36:29.544615 1681323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:36:29.552325 1681323 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:36:29.552411 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:36:29.560003 1681323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:36:29.572789 1681323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:36:29.585517 1681323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 01:36:29.599349 1681323 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:36:29.603106 1681323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:29.612969 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:29.733862 1681323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:29.752522 1681323 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186 for IP: 192.168.85.2
	I1222 01:36:29.752545 1681323 certs.go:195] generating shared ca certs ...
	I1222 01:36:29.752562 1681323 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:29.752701 1681323 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:36:29.752747 1681323 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:36:29.752758 1681323 certs.go:257] generating profile certs ...
	I1222 01:36:29.752867 1681323 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.key
	I1222 01:36:29.752925 1681323 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key.e54c24a5
	I1222 01:36:29.752976 1681323 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key
	I1222 01:36:29.753099 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:36:29.753135 1681323 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:36:29.753147 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:36:29.753174 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:36:29.753203 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:36:29.753232 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:36:29.753285 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:36:29.753910 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:36:29.782071 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:36:29.803383 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:36:29.824019 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:36:29.845035 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:36:29.866115 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:36:29.883918 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:36:29.900943 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:36:29.918714 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:36:29.936559 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:36:29.954160 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:36:29.972189 1681323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:36:29.985434 1681323 ssh_runner.go:195] Run: openssl version
	I1222 01:36:29.992444 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.000140 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:36:30.014964 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.043109 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.043223 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.108760 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:36:30.118305 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.127792 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:36:30.136802 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.141548 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.141643 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.184623 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:36:30.193382 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.201724 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:36:30.210242 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.214881 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.214969 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.256748 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:36:30.264842 1681323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:36:30.268912 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:36:30.310683 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:36:30.352386 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:36:30.393519 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:36:30.434377 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:36:30.475355 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:36:30.540692 1681323 kubeadm.go:401] StartCluster: {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:30.540782 1681323 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:36:30.540866 1681323 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:36:30.575229 1681323 cri.go:96] found id: ""
	I1222 01:36:30.575312 1681323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:36:30.584220 1681323 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:36:30.584293 1681323 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:36:30.584391 1681323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:36:30.594816 1681323 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:36:30.595221 1681323 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:30.595322 1681323 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-154186" cluster setting kubeconfig missing "no-preload-154186" context setting]
	I1222 01:36:30.595620 1681323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.596925 1681323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:36:30.604842 1681323 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:36:30.604873 1681323 kubeadm.go:602] duration metric: took 20.560605ms to restartPrimaryControlPlane
	I1222 01:36:30.604883 1681323 kubeadm.go:403] duration metric: took 64.203267ms to StartCluster
	I1222 01:36:30.604898 1681323 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.604963 1681323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:30.605576 1681323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.605779 1681323 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:36:30.606072 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:30.606145 1681323 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:36:30.606208 1681323 addons.go:70] Setting storage-provisioner=true in profile "no-preload-154186"
	I1222 01:36:30.606221 1681323 addons.go:239] Setting addon storage-provisioner=true in "no-preload-154186"
	I1222 01:36:30.606247 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.606455 1681323 addons.go:70] Setting dashboard=true in profile "no-preload-154186"
	I1222 01:36:30.606480 1681323 addons.go:239] Setting addon dashboard=true in "no-preload-154186"
	W1222 01:36:30.606487 1681323 addons.go:248] addon dashboard should already be in state true
	I1222 01:36:30.606508 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.606709 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.606923 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.609388 1681323 addons.go:70] Setting default-storageclass=true in profile "no-preload-154186"
	I1222 01:36:30.609534 1681323 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-154186"
	I1222 01:36:30.610760 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.611641 1681323 out.go:179] * Verifying Kubernetes components...
	I1222 01:36:30.614570 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:30.635770 1681323 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:36:30.638688 1681323 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:30.638712 1681323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:36:30.638781 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.665572 1681323 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:36:30.669104 1681323 addons.go:239] Setting addon default-storageclass=true in "no-preload-154186"
	I1222 01:36:30.669154 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.669590 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.683959 1681323 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:36:30.687520 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:36:30.687549 1681323 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:36:30.687626 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.694403 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.702255 1681323 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:30.702278 1681323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:36:30.702352 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.734213 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.746998 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.831368 1681323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:30.859591 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:30.874776 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:36:30.874854 1681323 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:36:30.886571 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:30.896408 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:36:30.896491 1681323 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:36:30.935406 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:36:30.935480 1681323 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:36:30.980952 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:36:30.980974 1681323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:36:30.995662 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:36:30.995686 1681323 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:36:31.011181 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:36:31.011207 1681323 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:36:31.025817 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:36:31.025897 1681323 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:36:31.040425 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:36:31.040451 1681323 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:36:31.053847 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:31.053877 1681323 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:36:31.068203 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:31.247341 1681323 node_ready.go:35] waiting up to 6m0s for node "no-preload-154186" to be "Ready" ...
	W1222 01:36:31.247593 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.247631 1681323 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.247506 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.247875 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.447386 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:31.513087 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.526286 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:31.572817 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:31.587392 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.642567 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.848456 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:31.905960 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.073298 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:32.132496 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.203849 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:32.270984 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.342424 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:32.407926 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.631301 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:32.690048 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.962216 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:33.025131 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:33.248294 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:33.408735 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:33.475146 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:33.554384 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:33.564336 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:33.639233 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:33.648250 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:34.651164 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:34.715118 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:34.728333 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:34.766376 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:34.808648 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:34.845664 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:35.248568 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:36.090694 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:36.156793 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:36.271773 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:36.333746 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:36.520979 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:36.615878 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:37.748720 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:38.179203 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:38.240963 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:38.510984 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:38.571044 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:39.749000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:40.373298 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:40.435360 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:40.473700 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:40.535044 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:41.925479 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:41.983644 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:42.248915 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:36:44.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:45.973776 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:46.041089 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:46.041130 1681323 retry.go:84] will retry after 8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:46.497259 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:46.561444 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:46.749030 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:47.516501 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:47.578863 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:50.255531 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000628133s
	I1222 01:36:50.255560 1670843 kubeadm.go:319] 
	I1222 01:36:50.255614 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:36:50.255656 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:36:50.255761 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:36:50.255770 1670843 kubeadm.go:319] 
	I1222 01:36:50.255874 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:36:50.255911 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:36:50.255945 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:36:50.255956 1670843 kubeadm.go:319] 
	I1222 01:36:50.263125 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:36:50.263638 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:36:50.263757 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:36:50.264047 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:36:50.264076 1670843 kubeadm.go:319] 
	I1222 01:36:50.264231 1670843 kubeadm.go:403] duration metric: took 8m5.963674476s to StartCluster
	I1222 01:36:50.264284 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:36:50.264365 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:36:50.264448 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:36:50.290175 1670843 cri.go:96] found id: ""
	I1222 01:36:50.290209 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.290218 1670843 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:36:50.290226 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:36:50.290294 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:36:50.322949 1670843 cri.go:96] found id: ""
	I1222 01:36:50.322982 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.322992 1670843 logs.go:284] No container was found matching "etcd"
	I1222 01:36:50.322998 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:36:50.323057 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:36:50.348796 1670843 cri.go:96] found id: ""
	I1222 01:36:50.348823 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.348833 1670843 logs.go:284] No container was found matching "coredns"
	I1222 01:36:50.348839 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:36:50.348963 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:36:50.377272 1670843 cri.go:96] found id: ""
	I1222 01:36:50.377300 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.377309 1670843 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:36:50.377316 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:36:50.377378 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:36:50.402189 1670843 cri.go:96] found id: ""
	I1222 01:36:50.402213 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.402222 1670843 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:36:50.402228 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:36:50.402290 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:36:50.426628 1670843 cri.go:96] found id: ""
	I1222 01:36:50.426656 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.426666 1670843 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:36:50.426674 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:36:50.426736 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:36:50.455040 1670843 cri.go:96] found id: ""
	I1222 01:36:50.455066 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.455076 1670843 logs.go:284] No container was found matching "kindnet"
	I1222 01:36:50.455086 1670843 logs.go:123] Gathering logs for kubelet ...
	I1222 01:36:50.455098 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:36:50.511757 1670843 logs.go:123] Gathering logs for dmesg ...
	I1222 01:36:50.511795 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:36:50.527148 1670843 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:36:50.527182 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:36:50.590231 1670843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:36:50.581376    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.582223    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.583755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.584210    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.585755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:36:50.581376    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.582223    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.583755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.584210    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.585755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:36:50.590258 1670843 logs.go:123] Gathering logs for containerd ...
	I1222 01:36:50.590270 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:36:50.629028 1670843 logs.go:123] Gathering logs for container status ...
	I1222 01:36:50.629065 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:36:50.657217 1670843 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:36:50.657266 1670843 out.go:285] * 
	W1222 01:36:50.657314 1670843 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:36:50.657331 1670843 out.go:285] * 
	W1222 01:36:50.659448 1670843 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:36:50.664218 1670843 out.go:203] 
	W1222 01:36:50.668072 1670843 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:36:50.668130 1670843 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:36:50.668162 1670843 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:36:50.671361 1670843 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072209044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072223001Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072264175Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072281972Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072292934Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072316770Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072339031Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072355876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072374076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072409957Z" level=info msg="Connect containerd service"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072783704Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.073465565Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089227568Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089296402Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089333145Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089383992Z" level=info msg="Start recovering state"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.142982119Z" level=info msg="Start event monitor"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143200181Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143267168Z" level=info msg="Start streaming server"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143353618Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143430780Z" level=info msg="runtime interface starting up..."
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143506456Z" level=info msg="starting plugins..."
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143586498Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:28:42 newest-cni-869293 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.147653548Z" level=info msg="containerd successfully booted in 0.098372s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:36:51.788823    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:51.789592    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:51.791209    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:51.791586    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:51.793136    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:36:51 up 1 day,  8:19,  0 user,  load average: 1.60, 1.25, 1.83
	Linux newest-cni-869293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:36:48 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:49 newest-cni-869293 kubelet[4756]: E1222 01:36:49.293347    4756 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:49 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:50 newest-cni-869293 kubelet[4761]: E1222 01:36:50.053491    4761 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:50 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:50 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:50 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 01:36:50 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:50 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:50 newest-cni-869293 kubelet[4847]: E1222 01:36:50.822459    4847 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:50 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:50 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:51 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 01:36:51 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:51 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:51 newest-cni-869293 kubelet[4896]: E1222 01:36:51.606223    4896 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:51 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:51 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293: exit status 6 (393.151969ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:36:52.369568 1683429 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-869293" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (502.91s)
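Editor's note: the kubelet crash loop above (`failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1`) matches the kubeadm preflight warning earlier in the log, which states that kubelet v1.35+ refuses to start on a cgroup v1 host unless the kubelet configuration option `FailCgroupV1` is set to `false`. A minimal sketch of that opt-out, assuming a standard KubeletConfiguration file (the field name is inferred from the warning text; confirm it against the KubeletConfiguration reference for the exact release in use):

```yaml
# Sketch only: explicitly re-enable cgroup v1 support for kubelet v1.35+
# on a cgroup v1 host, per the kubeadm SystemVerification warning.
# Field name inferred from the warning ('FailCgroupV1'); verify against
# the kubelet config reference before relying on it.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

Whether minikube exposes this option directly is not shown in the log; the report's own suggestion is to try `--extra-config=kubelet.cgroup-driver=systemd`, which addresses the related cgroup-driver mismatch rather than the v1 validation itself.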

TestStartStop/group/no-preload/serial/DeployApp (3.01s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-154186 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-154186 create -f testdata/busybox.yaml: exit status 1 (51.459769ms)

** stderr ** 
	error: context "no-preload-154186" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-154186 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-154186
helpers_test.go:244: (dbg) docker inspect no-preload-154186:

-- stdout --
	[
	    {
	        "Id": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	        "Created": "2025-12-22T01:26:03.981102368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1662042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:26:04.182435984Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hosts",
	        "LogPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7-json.log",
	        "Name": "/no-preload-154186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-154186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-154186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	                "LowerDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-154186",
	                "Source": "/var/lib/docker/volumes/no-preload-154186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-154186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-154186",
	                "name.minikube.sigs.k8s.io": "no-preload-154186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c84f7a2a870d36246b7a801b7bf7055532e2138e424e145ab2b2ac49b81f1d2",
	            "SandboxKey": "/var/run/docker/netns/4c84f7a2a870",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38685"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38686"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38689"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38687"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38688"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-154186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:f0:0c:13:3c:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16496b4bf71c897a797bd98394f7b6821dd2bd1d65c81e9f1efd0fcd1f14d1a5",
	                    "EndpointID": "60d289b6dced5e2b95a24119998812f02b77a1cbd32a594dea6fb7ca62aa8c31",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-154186",
	                        "66e4ffc32ac4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
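The `docker inspect` output above shows each container port the kicbase image exposes (22, 2376, 5000, 8443, 32443) published to a distinct port on 127.0.0.1. As a minimal sketch of how that `NetworkSettings.Ports` structure flattens into usable mappings (the sample dict below copies two entries from the output above; it is illustrative, not minikube's code):

```python
# Flatten a docker-inspect-style NetworkSettings.Ports mapping into
# (container_port, host_ip, host_port) tuples. Sample data mirrors the
# inspect output above for the no-preload-154186 container.
ports = {
    "22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38685"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38688"}],
}

def flatten_ports(ports):
    out = []
    for container_port, bindings in sorted(ports.items()):
        # A port can be exposed but unbound, in which case bindings is None.
        for b in bindings or []:
            out.append((container_port, b["HostIp"], b["HostPort"]))
    return out

for cp, ip, hp in flatten_ports(ports):
    print(f"{cp} -> {ip}:{hp}")
```

The 8443/tcp binding is the one that matters for the failure below: it is the apiserver port that the kubeconfig endpoint should point at.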
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 6 (338.063871ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:34:36.037983 1678490 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
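Exit status 6 here reflects a kubeconfig problem rather than a stopped host: the profile name `no-preload-154186` has no cluster entry in the shared kubeconfig, so the endpoint comparison in `status.go` cannot even start. A toy illustration of that lookup (this is not minikube's actual implementation; the kubeconfig dict and server URL are invented for the example):

```python
# Toy model of the check behind the error
#   kubeconfig endpoint: get endpoint: "<profile>" does not appear in <kubeconfig>
# The profile must exist as a cluster entry before its server URL can be
# compared against the running apiserver's port binding.
kubeconfig = {
    "clusters": [
        {"name": "newest-cni-869293",
         "cluster": {"server": "https://127.0.0.1:38700"}},
    ]
}

def lookup_endpoint(kubeconfig, profile):
    for c in kubeconfig["clusters"]:
        if c["name"] == profile:
            return c["cluster"]["server"]
    raise KeyError(f'"{profile}" does not appear in kubeconfig')

try:
    lookup_endpoint(kubeconfig, "no-preload-154186")
except KeyError as e:
    # This is the state the post-mortem below captured: host Running,
    # but status degrades to exit code 6 because the entry is missing.
    print("status would exit 6:", e)
```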
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-154186 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───
──────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───
──────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-778490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:24 UTC │ 22 Dec 25 01:24 UTC │
	│ start   │ -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:24 UTC │ 22 Dec 25 01:25 UTC │
	│ image   │ old-k8s-version-433815 image list --format=json                                                                                                                                                                                                          │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───
──────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:28:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:28:29.517235 1670843 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:28:29.517360 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517375 1670843 out.go:374] Setting ErrFile to fd 2...
	I1222 01:28:29.517381 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517635 1670843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:28:29.518139 1670843 out.go:368] Setting JSON to false
	I1222 01:28:29.519021 1670843 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115862,"bootTime":1766251047,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:28:29.519085 1670843 start.go:143] virtualization:  
	I1222 01:28:29.523165 1670843 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:28:29.526534 1670843 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:28:29.526612 1670843 notify.go:221] Checking for updates...
	I1222 01:28:29.533896 1670843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:28:29.537080 1670843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:28:29.540168 1670843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:28:29.543253 1670843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:28:29.546250 1670843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:28:29.549849 1670843 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:29.549971 1670843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:28:29.575293 1670843 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:28:29.575440 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.641901 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.632088848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.642009 1670843 docker.go:319] overlay module found
	I1222 01:28:29.645181 1670843 out.go:179] * Using the docker driver based on user configuration
	I1222 01:28:29.648076 1670843 start.go:309] selected driver: docker
	I1222 01:28:29.648097 1670843 start.go:928] validating driver "docker" against <nil>
	I1222 01:28:29.648110 1670843 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:28:29.648868 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.705361 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.695866952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.705519 1670843 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:28:29.705566 1670843 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:28:29.705783 1670843 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:28:29.708430 1670843 out.go:179] * Using Docker driver with root privileges
	I1222 01:28:29.711256 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:29.711321 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:29.711336 1670843 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:28:29.711424 1670843 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:29.714525 1670843 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:28:29.717251 1670843 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:28:29.720207 1670843 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:28:29.723048 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:29.723095 1670843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:28:29.723110 1670843 cache.go:65] Caching tarball of preloaded images
	I1222 01:28:29.723137 1670843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:28:29.723197 1670843 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:28:29.723207 1670843 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:28:29.723316 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:29.723334 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json: {Name:mk7d2be4f8d5fd1ff0598339a0c1f4c8dc1289c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:29.752728 1670843 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:28:29.752750 1670843 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:28:29.752765 1670843 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:28:29.752806 1670843 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:28:29.752907 1670843 start.go:364] duration metric: took 85.72µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:28:29.752938 1670843 start.go:93] Provisioning new machine with config: &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:28:29.753004 1670843 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:28:29.756462 1670843 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:28:29.756690 1670843 start.go:159] libmachine.API.Create for "newest-cni-869293" (driver="docker")
	I1222 01:28:29.756724 1670843 client.go:173] LocalClient.Create starting
	I1222 01:28:29.756801 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:28:29.756843 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756861 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.756915 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:28:29.756931 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756942 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.757315 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:28:29.774941 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:28:29.775018 1670843 network_create.go:284] running [docker network inspect newest-cni-869293] to gather additional debugging logs...
	I1222 01:28:29.775034 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293
	W1222 01:28:29.790350 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 returned with exit code 1
	I1222 01:28:29.790377 1670843 network_create.go:287] error running [docker network inspect newest-cni-869293]: docker network inspect newest-cni-869293: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-869293 not found
	I1222 01:28:29.790389 1670843 network_create.go:289] output of [docker network inspect newest-cni-869293]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-869293 not found
	
	** /stderr **
	I1222 01:28:29.790489 1670843 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:29.811617 1670843 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:28:29.811973 1670843 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:28:29.812349 1670843 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:28:29.812832 1670843 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3f50}
	I1222 01:28:29.812860 1670843 network_create.go:124] attempt to create docker network newest-cni-869293 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:28:29.812925 1670843 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-869293 newest-cni-869293
	I1222 01:28:29.869740 1670843 network_create.go:108] docker network newest-cni-869293 192.168.76.0/24 created
	I1222 01:28:29.869775 1670843 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-869293" container
	I1222 01:28:29.869851 1670843 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:28:29.886155 1670843 cli_runner.go:164] Run: docker volume create newest-cni-869293 --label name.minikube.sigs.k8s.io=newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:28:29.904602 1670843 oci.go:103] Successfully created a docker volume newest-cni-869293
	I1222 01:28:29.904703 1670843 cli_runner.go:164] Run: docker run --rm --name newest-cni-869293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --entrypoint /usr/bin/test -v newest-cni-869293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:28:30.526342 1670843 oci.go:107] Successfully prepared a docker volume newest-cni-869293
	I1222 01:28:30.526403 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:30.526413 1670843 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:28:30.526485 1670843 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:28:35.482640 1670843 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956113773s)
	I1222 01:28:35.482672 1670843 kic.go:203] duration metric: took 4.956255379s to extract preloaded images to volume ...
	W1222 01:28:35.482805 1670843 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:28:35.482926 1670843 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:28:35.547405 1670843 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-869293 --name newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-869293 --network newest-cni-869293 --ip 192.168.76.2 --volume newest-cni-869293:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:28:35.858405 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Running}}
	I1222 01:28:35.884175 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:35.909405 1670843 cli_runner.go:164] Run: docker exec newest-cni-869293 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:28:35.959724 1670843 oci.go:144] the created container "newest-cni-869293" has a running status.
	I1222 01:28:35.959757 1670843 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa...
	I1222 01:28:36.206517 1670843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:28:36.231277 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.257362 1670843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:28:36.257407 1670843 kic_runner.go:114] Args: [docker exec --privileged newest-cni-869293 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:28:36.342107 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.366444 1670843 machine.go:94] provisionDockerMachine start ...
	I1222 01:28:36.366556 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:36.385946 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:36.386403 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:36.386423 1670843 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:28:36.387088 1670843 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39762->127.0.0.1:38695: read: connection reset by peer
	I1222 01:28:39.522339 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.522366 1670843 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:28:39.522451 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.547399 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.547774 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.547786 1670843 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:28:39.694424 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.694503 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.714200 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.714526 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.714551 1670843 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:28:39.850447 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:28:39.850500 1670843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:28:39.850540 1670843 ubuntu.go:190] setting up certificates
	I1222 01:28:39.850552 1670843 provision.go:84] configureAuth start
	I1222 01:28:39.850620 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:39.867785 1670843 provision.go:143] copyHostCerts
	I1222 01:28:39.867858 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:28:39.867874 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:28:39.867957 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:28:39.868053 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:28:39.868064 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:28:39.868091 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:28:39.868150 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:28:39.868160 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:28:39.868186 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:28:39.868234 1670843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:28:40.422763 1670843 provision.go:177] copyRemoteCerts
	I1222 01:28:40.422844 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:28:40.422895 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.440513 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.538074 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:28:40.556414 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:28:40.575373 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:28:40.593558 1670843 provision.go:87] duration metric: took 742.986656ms to configureAuth
	I1222 01:28:40.593586 1670843 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:28:40.593787 1670843 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:40.593803 1670843 machine.go:97] duration metric: took 4.22733559s to provisionDockerMachine
	I1222 01:28:40.593812 1670843 client.go:176] duration metric: took 10.837081515s to LocalClient.Create
	I1222 01:28:40.593832 1670843 start.go:167] duration metric: took 10.837143899s to libmachine.API.Create "newest-cni-869293"
	I1222 01:28:40.593852 1670843 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:28:40.593867 1670843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:28:40.593917 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:28:40.593967 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.610836 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.706406 1670843 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:28:40.710036 1670843 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:28:40.710063 1670843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:28:40.710074 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:28:40.710147 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:28:40.710227 1670843 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:28:40.710335 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:28:40.718359 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:40.738021 1670843 start.go:296] duration metric: took 144.14894ms for postStartSetup
	I1222 01:28:40.738502 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.756061 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:40.756360 1670843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:28:40.756410 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.773896 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.867396 1670843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:28:40.872357 1670843 start.go:128] duration metric: took 11.119339666s to createHost
	I1222 01:28:40.872384 1670843 start.go:83] releasing machines lock for "newest-cni-869293", held for 11.11946774s
	I1222 01:28:40.872459 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.890060 1670843 ssh_runner.go:195] Run: cat /version.json
	I1222 01:28:40.890150 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.890413 1670843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:28:40.890480 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.915709 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.935746 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:41.107089 1670843 ssh_runner.go:195] Run: systemctl --version
	I1222 01:28:41.113703 1670843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:28:41.118148 1670843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:28:41.118236 1670843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:28:41.147883 1670843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:28:41.147909 1670843 start.go:496] detecting cgroup driver to use...
	I1222 01:28:41.147968 1670843 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:28:41.148044 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:28:41.163654 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:28:41.177388 1670843 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:28:41.177464 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:28:41.195478 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:28:41.214648 1670843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:28:41.335255 1670843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:28:41.459740 1670843 docker.go:234] disabling docker service ...
	I1222 01:28:41.459818 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:28:41.481154 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:28:41.495828 1670843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:28:41.616188 1670843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:28:41.741458 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:28:41.755996 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:28:41.769875 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:28:41.778912 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:28:41.787547 1670843 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:28:41.787619 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:28:41.796129 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.804747 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:28:41.813988 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.823382 1670843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:28:41.831512 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:28:41.840674 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:28:41.849663 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:28:41.859033 1670843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:28:41.866669 1670843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:28:41.874404 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:41.996020 1670843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:28:42.147980 1670843 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:28:42.148078 1670843 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:28:42.153206 1670843 start.go:564] Will wait 60s for crictl version
	I1222 01:28:42.153310 1670843 ssh_runner.go:195] Run: which crictl
	I1222 01:28:42.158111 1670843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:28:42.188961 1670843 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:28:42.189058 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.212951 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.242832 1670843 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:28:42.245962 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:42.263401 1670843 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:28:42.267705 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.281779 1670843 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:28:42.284630 1670843 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:28:42.284796 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:42.284882 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.315646 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.315686 1670843 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:28:42.315760 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.343479 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.343504 1670843 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:28:42.343513 1670843 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:28:42.343653 1670843 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:28:42.343729 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:28:42.368317 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:42.368388 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:42.368427 1670843 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:28:42.368461 1670843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:28:42.368600 1670843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
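	(Annotation, not part of the log: the generated config above sets `podSubnet: "10.42.0.0/16"` in the ClusterConfiguration and `clusterCIDR: "10.42.0.0/16"` in the KubeProxyConfiguration; these two values must agree or kube-proxy will misclassify pod traffic. A minimal shell sketch of that consistency check, using the values from the config above; the extraction and comparison logic is illustrative, not taken from minikube:)

```shell
# Sanity-check that kubeadm's podSubnet matches kube-proxy's clusterCIDR.
# The two-line fragment below reproduces the relevant fields from the
# generated kubeadm.yaml shown in the log.
cfg=$(cat <<'EOF'
podSubnet: "10.42.0.0/16"
clusterCIDR: "10.42.0.0/16"
EOF
)
pod=$(echo "$cfg" | sed -n 's/^podSubnet: "\(.*\)"/\1/p')
cidr=$(echo "$cfg" | sed -n 's/^clusterCIDR: "\(.*\)"/\1/p')
if [ "$pod" = "$cidr" ]; then
  echo "subnets agree: $pod"
else
  echo "MISMATCH: podSubnet=$pod clusterCIDR=$cidr"
fi
```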
	
	I1222 01:28:42.368678 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:28:42.376805 1670843 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:28:42.376907 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:28:42.384913 1670843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:28:42.398203 1670843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:28:42.412066 1670843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:28:42.425146 1670843 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:28:42.428711 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.438586 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:42.581997 1670843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:28:42.598481 1670843 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:28:42.598505 1670843 certs.go:195] generating shared ca certs ...
	I1222 01:28:42.598523 1670843 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:42.598712 1670843 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:28:42.598780 1670843 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:28:42.598795 1670843 certs.go:257] generating profile certs ...
	I1222 01:28:42.598868 1670843 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:28:42.598888 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt with IP's: []
	I1222 01:28:43.368024 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt ...
	I1222 01:28:43.368059 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt: {Name:mkfc3a338fdb42add5491ce4694522898b79b83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368262 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key ...
	I1222 01:28:43.368276 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key: {Name:mkea74dd50bc644b440bafb99fc54190912b7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368378 1670843 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:28:43.368397 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:28:43.608821 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce ...
	I1222 01:28:43.608852 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce: {Name:mk0db6b3e8c9bf7aff940b44fd05b130d9d585d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609048 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce ...
	I1222 01:28:43.609064 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce: {Name:mk9a3f763ae7146332940a9b4d9169402652e2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609162 1670843 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt
	I1222 01:28:43.609244 1670843 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key
	I1222 01:28:43.609298 1670843 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:28:43.609318 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt with IP's: []
	I1222 01:28:43.826780 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt ...
	I1222 01:28:43.826812 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt: {Name:mk2b8cedfe513097eb57f8b68379ebde37c90b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827666 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key ...
	I1222 01:28:43.827685 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key: {Name:mk5be442e41d6696d708120ad1b125b0231d124b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827919 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:28:43.827970 1670843 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:28:43.827983 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:28:43.828011 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:28:43.828040 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:28:43.828064 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:28:43.828110 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:43.828781 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:28:43.849273 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:28:43.869748 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:28:43.888552 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:28:43.907953 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:28:43.926293 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:28:43.944116 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:28:43.961772 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:28:43.979659 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:28:44.001330 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:28:44.024142 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:28:44.045144 1670843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:28:44.059777 1670843 ssh_runner.go:195] Run: openssl version
	I1222 01:28:44.066272 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.074265 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:28:44.081820 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085681 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085753 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.127374 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:28:44.135158 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:28:44.143793 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.151667 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:28:44.159424 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163331 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163415 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.204528 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:28:44.212004 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:28:44.219747 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.227544 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:28:44.235151 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239441 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239512 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.281186 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.288873 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.296839 1670843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:28:44.300500 1670843 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:28:44.300563 1670843 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:44.300658 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:28:44.300723 1670843 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:28:44.328761 1670843 cri.go:96] found id: ""
	I1222 01:28:44.328834 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:28:44.336639 1670843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:28:44.344992 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:28:44.345060 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:28:44.353083 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:28:44.353102 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:28:44.353165 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:28:44.361047 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:28:44.361129 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:28:44.368921 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:28:44.377002 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:28:44.377072 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:28:44.384992 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.393081 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:28:44.393153 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.400779 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:28:44.408653 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:28:44.408723 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:28:44.416474 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:28:44.452812 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:28:44.452877 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:28:44.531198 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:28:44.531307 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:28:44.531355 1670843 kubeadm.go:319] OS: Linux
	I1222 01:28:44.531413 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:28:44.531466 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:28:44.531524 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:28:44.531590 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:28:44.531650 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:28:44.531733 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:28:44.531797 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:28:44.531864 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:28:44.531927 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:28:44.604053 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:28:44.604174 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:28:44.604270 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:28:44.614524 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:28:44.617569 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:28:44.617679 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:28:44.617781 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:28:44.891888 1670843 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:28:45.152805 1670843 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:28:45.274684 1670843 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:28:45.517117 1670843 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:28:45.648291 1670843 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:28:45.648639 1670843 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:45.933782 1670843 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:28:45.933931 1670843 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:46.072331 1670843 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:28:46.408818 1670843 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:28:46.502126 1670843 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:28:46.502542 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:28:46.995871 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:28:47.191545 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:28:47.264763 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:28:47.533721 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:28:47.788353 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:28:47.789301 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:28:47.796939 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:28:47.800747 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:28:47.800866 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:28:47.807894 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:28:47.807975 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:28:47.826994 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:28:47.827460 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:28:47.835465 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:28:47.835861 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:28:47.836121 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:28:47.973178 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:28:47.973389 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:30:31.528083 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001198283s
	I1222 01:30:31.528113 1661698 kubeadm.go:319] 
	I1222 01:30:31.528168 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:30:31.528204 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:30:31.528304 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:30:31.528309 1661698 kubeadm.go:319] 
	I1222 01:30:31.528414 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:30:31.528445 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:30:31.528475 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:30:31.528479 1661698 kubeadm.go:319] 
	I1222 01:30:31.534468 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:30:31.534939 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:30:31.535063 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:30:31.535323 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:30:31.535332 1661698 kubeadm.go:319] 
	I1222 01:30:31.535406 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:30:31.535527 1661698 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
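The cgroups-v1 warning in the error output above points at the kubelet's `failCgroupV1` setting. As a minimal sketch (not something this test run actually applies), a KubeletConfiguration fragment that explicitly opts back into cgroup v1 for kubelet v1.35+, per the warning text, would look like:

```yaml
# Hypothetical KubeletConfiguration fragment, assuming the v1beta1 config API.
# Per the [WARNING SystemVerification] message, kubelet v1.35+ on a cgroup v1
# host also requires explicitly skipping the cgroup validation (e.g. via
# kubeadm's --ignore-preflight-errors), so this field alone may not suffice.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
```

Note that in this log the failure is the kubelet never serving `http://127.0.0.1:10248/healthz` at all, so the warning may be incidental rather than the root cause; `journalctl -xeu kubelet` output (gathered at the end of this section) is what would distinguish the two.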
	I1222 01:30:31.535610 1661698 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:30:31.944583 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:30:31.959295 1661698 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:30:31.959366 1661698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:30:31.967786 1661698 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:30:31.967809 1661698 kubeadm.go:158] found existing configuration files:
	
	I1222 01:30:31.967875 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:30:31.976572 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:30:31.976645 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:30:31.984566 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:30:31.995193 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:30:31.995287 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:30:32.006676 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.018592 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:30:32.018733 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.028237 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:30:32.043043 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:30:32.043172 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:30:32.052235 1661698 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:30:32.094134 1661698 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:30:32.094474 1661698 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:30:32.174573 1661698 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:30:32.174734 1661698 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:30:32.174816 1661698 kubeadm.go:319] OS: Linux
	I1222 01:30:32.174901 1661698 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:30:32.174991 1661698 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:30:32.175073 1661698 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:30:32.175183 1661698 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:30:32.175273 1661698 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:30:32.175356 1661698 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:30:32.175408 1661698 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:30:32.175461 1661698 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:30:32.175510 1661698 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:30:32.244754 1661698 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:30:32.244977 1661698 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:30:32.245121 1661698 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:30:32.250598 1661698 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:30:32.253515 1661698 out.go:252]   - Generating certificates and keys ...
	I1222 01:30:32.253625 1661698 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:30:32.253721 1661698 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:30:32.253819 1661698 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:30:32.253899 1661698 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:30:32.253989 1661698 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:30:32.254107 1661698 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:30:32.254400 1661698 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:30:32.254506 1661698 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:30:32.254956 1661698 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:30:32.255237 1661698 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:30:32.255491 1661698 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:30:32.255552 1661698 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:30:32.402631 1661698 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:30:32.599258 1661698 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:30:33.036089 1661698 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:30:33.328680 1661698 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:30:33.401037 1661698 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:30:33.401569 1661698 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:30:33.404184 1661698 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:30:33.407507 1661698 out.go:252]   - Booting up control plane ...
	I1222 01:30:33.407615 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:30:33.407700 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:30:33.407772 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:30:33.430782 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:30:33.431319 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:30:33.439215 1661698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:30:33.439558 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:30:33.439606 1661698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:30:33.604011 1661698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:30:33.604133 1661698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:32:47.973116 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000283341s
	I1222 01:32:47.973157 1670843 kubeadm.go:319] 
	I1222 01:32:47.973324 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:32:47.973386 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:32:47.973953 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:32:47.974135 1670843 kubeadm.go:319] 
	I1222 01:32:47.974333 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:32:47.974391 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:32:47.974447 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:32:47.974452 1670843 kubeadm.go:319] 
	I1222 01:32:47.979596 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:32:47.980069 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:32:47.980186 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:32:47.980594 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:32:47.980620 1670843 kubeadm.go:319] 
	I1222 01:32:47.980717 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:32:47.980850 1670843 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000283341s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:32:47.980937 1670843 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:32:48.391513 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:32:48.405347 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:32:48.405424 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:32:48.413621 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:32:48.413642 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:32:48.413694 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:32:48.421650 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:32:48.421714 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:32:48.429403 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:32:48.437071 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:32:48.437146 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:32:48.444785 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.452627 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:32:48.452694 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.460251 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:32:48.468521 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:32:48.468599 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:32:48.476496 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:32:48.517508 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:32:48.517575 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:32:48.594935 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:32:48.595008 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:32:48.595046 1670843 kubeadm.go:319] OS: Linux
	I1222 01:32:48.595095 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:32:48.595143 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:32:48.595192 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:32:48.595240 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:32:48.595288 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:32:48.595340 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:32:48.595387 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:32:48.595435 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:32:48.595482 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:32:48.660826 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:32:48.660943 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:32:48.661079 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:32:48.666682 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:32:48.672026 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:32:48.672133 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:32:48.672215 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:32:48.672316 1670843 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:32:48.672398 1670843 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:32:48.672480 1670843 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:32:48.672546 1670843 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:32:48.672621 1670843 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:32:48.672694 1670843 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:32:48.672781 1670843 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:32:48.672898 1670843 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:32:48.672968 1670843 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:32:48.673051 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:32:48.931141 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:32:49.321960 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:32:49.787743 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:32:49.993441 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:32:50.084543 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:32:50.085011 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:32:50.087783 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:32:50.091167 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:32:50.091277 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:32:50.091357 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:32:50.091431 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:32:50.113375 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:32:50.113487 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:32:50.122587 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:32:50.123919 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:32:50.124082 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:32:50.256676 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:32:50.256808 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:34:33.605118 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202243s
	I1222 01:34:33.610423 1661698 kubeadm.go:319] 
	I1222 01:34:33.610510 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:34:33.610555 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:34:33.610673 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:34:33.610685 1661698 kubeadm.go:319] 
	I1222 01:34:33.610798 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:34:33.610837 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:34:33.610875 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:34:33.610884 1661698 kubeadm.go:319] 
	I1222 01:34:33.611729 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:34:33.612160 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:34:33.612286 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:34:33.612615 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:34:33.612639 1661698 kubeadm.go:319] 
	I1222 01:34:33.612738 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:34:33.612826 1661698 kubeadm.go:403] duration metric: took 8m6.521308561s to StartCluster
	I1222 01:34:33.612869 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:34:33.612963 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:34:33.638994 1661698 cri.go:96] found id: ""
	I1222 01:34:33.639065 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.639100 1661698 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:34:33.639124 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:34:33.639214 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:34:33.664403 1661698 cri.go:96] found id: ""
	I1222 01:34:33.664427 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.664436 1661698 logs.go:284] No container was found matching "etcd"
	I1222 01:34:33.664446 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:34:33.664509 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:34:33.695712 1661698 cri.go:96] found id: ""
	I1222 01:34:33.695738 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.695748 1661698 logs.go:284] No container was found matching "coredns"
	I1222 01:34:33.695754 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:34:33.695824 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:34:33.725832 1661698 cri.go:96] found id: ""
	I1222 01:34:33.725860 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.725869 1661698 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:34:33.725877 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:34:33.725946 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:34:33.754495 1661698 cri.go:96] found id: ""
	I1222 01:34:33.754525 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.754545 1661698 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:34:33.754568 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:34:33.754673 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:34:33.781930 1661698 cri.go:96] found id: ""
	I1222 01:34:33.781958 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.781967 1661698 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:34:33.781974 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:34:33.782035 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:34:33.809335 1661698 cri.go:96] found id: ""
	I1222 01:34:33.809412 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.809435 1661698 logs.go:284] No container was found matching "kindnet"
	I1222 01:34:33.809461 1661698 logs.go:123] Gathering logs for kubelet ...
	I1222 01:34:33.809500 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:34:33.870551 1661698 logs.go:123] Gathering logs for dmesg ...
	I1222 01:34:33.870590 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:34:33.887403 1661698 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:34:33.887432 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:34:33.956483 1661698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:34:33.956566 1661698 logs.go:123] Gathering logs for containerd ...
	I1222 01:34:33.956596 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:34:34.001866 1661698 logs.go:123] Gathering logs for container status ...
	I1222 01:34:34.001912 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:34:34.042014 1661698 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:34:34.042151 1661698 out.go:285] * 
	W1222 01:34:34.042248 1661698 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.042299 1661698 out.go:285] * 
	W1222 01:34:34.044850 1661698 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:34:34.050486 1661698 out.go:203] 
	W1222 01:34:34.053486 1661698 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.053539 1661698 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:34:34.053562 1661698 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:34:34.056784 1661698 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:26:14 no-preload-154186 containerd[756]: time="2025-12-22T01:26:14.470410254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.787207960Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.789433827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.803823042Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.804652777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.331340037Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.334400636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.353661044Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.357804008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.359874060Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.362802572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.384615077Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.388118298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.047922253Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.050246394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.057661077Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.058387369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.695409612Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.698274402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.719228692Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.720159212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.239290626Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.241741545Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.251782100Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.252093930Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:36.697799    5678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:36.698426    5678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:36.699968    5678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:36.700536    5678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:36.702261    5678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:34:36 up 1 day,  8:17,  0 user,  load average: 0.65, 1.24, 1.93
	Linux no-preload-154186 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:34:33 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:33 no-preload-154186 kubelet[5365]: E1222 01:34:33.553784    5365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:33 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:33 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:34 no-preload-154186 kubelet[5449]: E1222 01:34:34.366562    5449 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:35 no-preload-154186 kubelet[5536]: E1222 01:34:35.080688    5536 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 01:34:35 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:35 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:35 no-preload-154186 kubelet[5580]: E1222 01:34:35.886384    5580 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:36 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 22 01:34:36 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:36 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 6 (312.474032ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:34:37.140061 1678717 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-154186
helpers_test.go:244: (dbg) docker inspect no-preload-154186:

-- stdout --
	[
	    {
	        "Id": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	        "Created": "2025-12-22T01:26:03.981102368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1662042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:26:04.182435984Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hosts",
	        "LogPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7-json.log",
	        "Name": "/no-preload-154186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-154186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-154186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	                "LowerDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-154186",
	                "Source": "/var/lib/docker/volumes/no-preload-154186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-154186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-154186",
	                "name.minikube.sigs.k8s.io": "no-preload-154186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c84f7a2a870d36246b7a801b7bf7055532e2138e424e145ab2b2ac49b81f1d2",
	            "SandboxKey": "/var/run/docker/netns/4c84f7a2a870",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38685"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38686"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38689"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38687"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38688"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-154186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:f0:0c:13:3c:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16496b4bf71c897a797bd98394f7b6821dd2bd1d65c81e9f1efd0fcd1f14d1a5",
	                    "EndpointID": "60d289b6dced5e2b95a24119998812f02b77a1cbd32a594dea6fb7ca62aa8c31",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-154186",
	                        "66e4ffc32ac4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 6 (354.043879ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:34:37.511739 1678795 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-154186 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-778490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:24 UTC │ 22 Dec 25 01:24 UTC │
	│ start   │ -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:24 UTC │ 22 Dec 25 01:25 UTC │
	│ image   │ old-k8s-version-433815 image list --format=json                                                                                                                                                                                                          │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:28:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:28:29.517235 1670843 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:28:29.517360 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517375 1670843 out.go:374] Setting ErrFile to fd 2...
	I1222 01:28:29.517381 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517635 1670843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:28:29.518139 1670843 out.go:368] Setting JSON to false
	I1222 01:28:29.519021 1670843 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115862,"bootTime":1766251047,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:28:29.519085 1670843 start.go:143] virtualization:  
	I1222 01:28:29.523165 1670843 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:28:29.526534 1670843 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:28:29.526612 1670843 notify.go:221] Checking for updates...
	I1222 01:28:29.533896 1670843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:28:29.537080 1670843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:28:29.540168 1670843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:28:29.543253 1670843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:28:29.546250 1670843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:28:29.549849 1670843 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:29.549971 1670843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:28:29.575293 1670843 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:28:29.575440 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.641901 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.632088848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.642009 1670843 docker.go:319] overlay module found
	I1222 01:28:29.645181 1670843 out.go:179] * Using the docker driver based on user configuration
	I1222 01:28:29.648076 1670843 start.go:309] selected driver: docker
	I1222 01:28:29.648097 1670843 start.go:928] validating driver "docker" against <nil>
	I1222 01:28:29.648110 1670843 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:28:29.648868 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.705361 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.695866952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.705519 1670843 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:28:29.705566 1670843 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:28:29.705783 1670843 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:28:29.708430 1670843 out.go:179] * Using Docker driver with root privileges
	I1222 01:28:29.711256 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:29.711321 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:29.711336 1670843 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:28:29.711424 1670843 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:29.714525 1670843 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:28:29.717251 1670843 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:28:29.720207 1670843 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:28:29.723048 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:29.723095 1670843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:28:29.723110 1670843 cache.go:65] Caching tarball of preloaded images
	I1222 01:28:29.723137 1670843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:28:29.723197 1670843 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:28:29.723207 1670843 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:28:29.723316 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:29.723334 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json: {Name:mk7d2be4f8d5fd1ff0598339a0c1f4c8dc1289c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:29.752728 1670843 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:28:29.752750 1670843 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:28:29.752765 1670843 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:28:29.752806 1670843 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:28:29.752907 1670843 start.go:364] duration metric: took 85.72µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:28:29.752938 1670843 start.go:93] Provisioning new machine with config: &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:28:29.753004 1670843 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:28:29.756462 1670843 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:28:29.756690 1670843 start.go:159] libmachine.API.Create for "newest-cni-869293" (driver="docker")
	I1222 01:28:29.756724 1670843 client.go:173] LocalClient.Create starting
	I1222 01:28:29.756801 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:28:29.756843 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756861 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.756915 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:28:29.756931 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756942 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.757315 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:28:29.774941 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:28:29.775018 1670843 network_create.go:284] running [docker network inspect newest-cni-869293] to gather additional debugging logs...
	I1222 01:28:29.775034 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293
	W1222 01:28:29.790350 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 returned with exit code 1
	I1222 01:28:29.790377 1670843 network_create.go:287] error running [docker network inspect newest-cni-869293]: docker network inspect newest-cni-869293: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-869293 not found
	I1222 01:28:29.790389 1670843 network_create.go:289] output of [docker network inspect newest-cni-869293]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-869293 not found
	
	** /stderr **
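The lines above show a useful pattern: when the templated `docker network inspect` fails, minikube re-runs the bare command purely to gather stdout/stderr for the log. A generic sketch of that fallback, using `subprocess` and a harmless failing command in place of docker (the command is illustrative, not minikube's code):

```python
import subprocess

def run_with_debug(cmd):
    """Run a command; on non-zero exit, keep stdout/stderr for debug logging,
    mirroring the network_create.go fallback shown in the log above."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {"code": proc.returncode, "stdout": proc.stdout, "stderr": proc.stderr}

# Illustrative: listing a missing path fails much like an absent docker network.
result = run_with_debug(["ls", "/no/such/network"])
print(result["code"] != 0)  # → True
```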
	I1222 01:28:29.790489 1670843 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:29.811617 1670843 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:28:29.811973 1670843 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:28:29.812349 1670843 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:28:29.812832 1670843 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3f50}
	I1222 01:28:29.812860 1670843 network_create.go:124] attempt to create docker network newest-cni-869293 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:28:29.812925 1670843 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-869293 newest-cni-869293
	I1222 01:28:29.869740 1670843 network_create.go:108] docker network newest-cni-869293 192.168.76.0/24 created
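The subnet scan above (skip 192.168.49.0/24, .58, .67 because they are taken; settle on 192.168.76.0/24) can be sketched as a walk over candidate /24 blocks. This is a minimal illustration, not minikube's actual implementation; the step size of 9 in the third octet and the taken-set are assumptions read off the log lines:

```python
import ipaddress

def first_free_subnet(taken, start="192.168.49.0/24", step=9, attempts=20):
    """Walk candidate /24 subnets (third octet advances by `step`) and return
    the first one that does not overlap an already-used network."""
    taken_nets = [ipaddress.ip_network(t) for t in taken]
    candidate = ipaddress.ip_network(start)
    for _ in range(attempts):
        if not any(candidate.overlaps(t) for t in taken_nets):
            return str(candidate)
        # advance the third octet, mirroring the 49 -> 58 -> 67 -> 76 walk above
        candidate = ipaddress.ip_network((int(candidate.network_address) + step * 256, 24))
    raise RuntimeError("no free subnet found")

taken = ["192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"]
print(first_free_subnet(taken))  # → 192.168.76.0/24
```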
	I1222 01:28:29.869775 1670843 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-869293" container
	I1222 01:28:29.869851 1670843 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:28:29.886155 1670843 cli_runner.go:164] Run: docker volume create newest-cni-869293 --label name.minikube.sigs.k8s.io=newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:28:29.904602 1670843 oci.go:103] Successfully created a docker volume newest-cni-869293
	I1222 01:28:29.904703 1670843 cli_runner.go:164] Run: docker run --rm --name newest-cni-869293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --entrypoint /usr/bin/test -v newest-cni-869293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:28:30.526342 1670843 oci.go:107] Successfully prepared a docker volume newest-cni-869293
	I1222 01:28:30.526403 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:30.526413 1670843 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:28:30.526485 1670843 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:28:35.482640 1670843 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956113773s)
	I1222 01:28:35.482672 1670843 kic.go:203] duration metric: took 4.956255379s to extract preloaded images to volume ...
	W1222 01:28:35.482805 1670843 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:28:35.482926 1670843 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:28:35.547405 1670843 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-869293 --name newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-869293 --network newest-cni-869293 --ip 192.168.76.2 --volume newest-cni-869293:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:28:35.858405 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Running}}
	I1222 01:28:35.884175 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:35.909405 1670843 cli_runner.go:164] Run: docker exec newest-cni-869293 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:28:35.959724 1670843 oci.go:144] the created container "newest-cni-869293" has a running status.
	I1222 01:28:35.959757 1670843 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa...
	I1222 01:28:36.206517 1670843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:28:36.231277 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.257362 1670843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:28:36.257407 1670843 kic_runner.go:114] Args: [docker exec --privileged newest-cni-869293 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:28:36.342107 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.366444 1670843 machine.go:94] provisionDockerMachine start ...
	I1222 01:28:36.366556 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:36.385946 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:36.386403 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:36.386423 1670843 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:28:36.387088 1670843 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39762->127.0.0.1:38695: read: connection reset by peer
	I1222 01:28:39.522339 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.522366 1670843 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:28:39.522451 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.547399 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.547774 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.547786 1670843 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:28:39.694424 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.694503 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.714200 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.714526 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.714551 1670843 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:28:39.850447 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: 
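The shell fragment run over SSH above makes the /etc/hosts edit idempotent: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic as a sketch (the sample file content is illustrative):

```python
import re

def set_hosts_entry(hosts_text, hostname):
    """Ensure /etc/hosts maps 127.0.1.1 to `hostname`, following the
    grep/sed/tee branches of the shell fragment above."""
    if re.search(r"^.*\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text  # hostname already present: nothing to do
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        # rewrite the existing 127.0.1.1 line in place
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {hostname}",
                      hosts_text, flags=re.M)
    # no 127.0.1.1 line yet: append one
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {hostname}\n"

before = "127.0.0.1 localhost\n127.0.1.1 old-name\n"
print(set_hosts_entry(before, "newest-cni-869293"))
```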
	I1222 01:28:39.850500 1670843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:28:39.850540 1670843 ubuntu.go:190] setting up certificates
	I1222 01:28:39.850552 1670843 provision.go:84] configureAuth start
	I1222 01:28:39.850620 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:39.867785 1670843 provision.go:143] copyHostCerts
	I1222 01:28:39.867858 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:28:39.867874 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:28:39.867957 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:28:39.868053 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:28:39.868064 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:28:39.868091 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:28:39.868150 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:28:39.868160 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:28:39.868186 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:28:39.868234 1670843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:28:40.422763 1670843 provision.go:177] copyRemoteCerts
	I1222 01:28:40.422844 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:28:40.422895 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.440513 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.538074 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:28:40.556414 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:28:40.575373 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:28:40.593558 1670843 provision.go:87] duration metric: took 742.986656ms to configureAuth
	I1222 01:28:40.593586 1670843 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:28:40.593787 1670843 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:40.593803 1670843 machine.go:97] duration metric: took 4.22733559s to provisionDockerMachine
	I1222 01:28:40.593812 1670843 client.go:176] duration metric: took 10.837081515s to LocalClient.Create
	I1222 01:28:40.593832 1670843 start.go:167] duration metric: took 10.837143899s to libmachine.API.Create "newest-cni-869293"
	I1222 01:28:40.593852 1670843 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:28:40.593867 1670843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:28:40.593917 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:28:40.593967 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.610836 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.706406 1670843 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:28:40.710036 1670843 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:28:40.710063 1670843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:28:40.710074 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:28:40.710147 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:28:40.710227 1670843 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:28:40.710335 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:28:40.718359 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:40.738021 1670843 start.go:296] duration metric: took 144.14894ms for postStartSetup
	I1222 01:28:40.738502 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.756061 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:40.756360 1670843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:28:40.756410 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.773896 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.867396 1670843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:28:40.872357 1670843 start.go:128] duration metric: took 11.119339666s to createHost
	I1222 01:28:40.872384 1670843 start.go:83] releasing machines lock for "newest-cni-869293", held for 11.11946774s
	I1222 01:28:40.872459 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.890060 1670843 ssh_runner.go:195] Run: cat /version.json
	I1222 01:28:40.890150 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.890413 1670843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:28:40.890480 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.915709 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.935746 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:41.107089 1670843 ssh_runner.go:195] Run: systemctl --version
	I1222 01:28:41.113703 1670843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:28:41.118148 1670843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:28:41.118236 1670843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:28:41.147883 1670843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
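The `find ... -exec mv {} {}.mk_disabled` step above sidelines any bridge or podman CNI configs by renaming them rather than deleting them, so kindnet's config wins without losing the originals. A rough equivalent, assuming the same name patterns (the demo directory and file names are illustrative):

```python
import tempfile
from pathlib import Path

def disable_bridge_cnis(net_d):
    """Rename bridge/podman CNI config files to *.mk_disabled, mirroring the
    find/mv command in the log line above; returns the names it disabled."""
    disabled = []
    for f in sorted(Path(net_d).iterdir()):
        if not f.is_file() or f.name.endswith(".mk_disabled"):
            continue
        if "bridge" in f.name or "podman" in f.name:
            f.rename(f.with_name(f.name + ".mk_disabled"))
            disabled.append(f.name)
    return disabled

# demo on a throwaway directory
d = Path(tempfile.mkdtemp())
(d / "87-podman-bridge.conflist").write_text("{}")
(d / "10-kindnet.conflist").write_text("{}")
print(disable_bridge_cnis(d))  # → ['87-podman-bridge.conflist']
```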
	I1222 01:28:41.147909 1670843 start.go:496] detecting cgroup driver to use...
	I1222 01:28:41.147968 1670843 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:28:41.148044 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:28:41.163654 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:28:41.177388 1670843 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:28:41.177464 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:28:41.195478 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:28:41.214648 1670843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:28:41.335255 1670843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:28:41.459740 1670843 docker.go:234] disabling docker service ...
	I1222 01:28:41.459818 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:28:41.481154 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:28:41.495828 1670843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:28:41.616188 1670843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:28:41.741458 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:28:41.755996 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:28:41.769875 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:28:41.778912 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:28:41.787547 1670843 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:28:41.787619 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:28:41.796129 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.804747 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:28:41.813988 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.823382 1670843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:28:41.831512 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:28:41.840674 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:28:41.849663 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:28:41.859033 1670843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:28:41.866669 1670843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:28:41.874404 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:41.996020 1670843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:28:42.147980 1670843 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:28:42.148078 1670843 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:28:42.153206 1670843 start.go:564] Will wait 60s for crictl version
	I1222 01:28:42.153310 1670843 ssh_runner.go:195] Run: which crictl
	I1222 01:28:42.158111 1670843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:28:42.188961 1670843 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:28:42.189058 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.212951 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.242832 1670843 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:28:42.245962 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:42.263401 1670843 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:28:42.267705 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.281779 1670843 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:28:42.284630 1670843 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:28:42.284796 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:42.284882 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.315646 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.315686 1670843 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:28:42.315760 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.343479 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.343504 1670843 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:28:42.343513 1670843 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:28:42.343653 1670843 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:28:42.343729 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:28:42.368317 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:42.368388 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:42.368427 1670843 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:28:42.368461 1670843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:28:42.368600 1670843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:28:42.368678 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:28:42.376805 1670843 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:28:42.376907 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:28:42.384913 1670843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:28:42.398203 1670843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:28:42.412066 1670843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:28:42.425146 1670843 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:28:42.428711 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.438586 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:42.581997 1670843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:28:42.598481 1670843 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:28:42.598505 1670843 certs.go:195] generating shared ca certs ...
	I1222 01:28:42.598523 1670843 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:42.598712 1670843 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:28:42.598780 1670843 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:28:42.598795 1670843 certs.go:257] generating profile certs ...
	I1222 01:28:42.598868 1670843 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:28:42.598888 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt with IP's: []
	I1222 01:28:43.368024 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt ...
	I1222 01:28:43.368059 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt: {Name:mkfc3a338fdb42add5491ce4694522898b79b83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368262 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key ...
	I1222 01:28:43.368276 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key: {Name:mkea74dd50bc644b440bafb99fc54190912b7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368378 1670843 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:28:43.368397 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:28:43.608821 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce ...
	I1222 01:28:43.608852 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce: {Name:mk0db6b3e8c9bf7aff940b44fd05b130d9d585d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609048 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce ...
	I1222 01:28:43.609064 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce: {Name:mk9a3f763ae7146332940a9b4d9169402652e2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609162 1670843 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt
	I1222 01:28:43.609244 1670843 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key
	I1222 01:28:43.609298 1670843 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:28:43.609318 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt with IP's: []
	I1222 01:28:43.826780 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt ...
	I1222 01:28:43.826812 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt: {Name:mk2b8cedfe513097eb57f8b68379ebde37c90b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827666 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key ...
	I1222 01:28:43.827685 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key: {Name:mk5be442e41d6696d708120ad1b125b0231d124b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827919 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:28:43.827970 1670843 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:28:43.827983 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:28:43.828011 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:28:43.828040 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:28:43.828064 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:28:43.828110 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:43.828781 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:28:43.849273 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:28:43.869748 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:28:43.888552 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:28:43.907953 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:28:43.926293 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:28:43.944116 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:28:43.961772 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:28:43.979659 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:28:44.001330 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:28:44.024142 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:28:44.045144 1670843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:28:44.059777 1670843 ssh_runner.go:195] Run: openssl version
	I1222 01:28:44.066272 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.074265 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:28:44.081820 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085681 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085753 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.127374 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:28:44.135158 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:28:44.143793 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.151667 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:28:44.159424 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163331 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163415 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.204528 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:28:44.212004 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:28:44.219747 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.227544 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:28:44.235151 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239441 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239512 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.281186 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.288873 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.296839 1670843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:28:44.300500 1670843 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:28:44.300563 1670843 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:44.300658 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:28:44.300723 1670843 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:28:44.328761 1670843 cri.go:96] found id: ""
	I1222 01:28:44.328834 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:28:44.336639 1670843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:28:44.344992 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:28:44.345060 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:28:44.353083 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:28:44.353102 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:28:44.353165 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:28:44.361047 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:28:44.361129 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:28:44.368921 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:28:44.377002 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:28:44.377072 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:28:44.384992 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.393081 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:28:44.393153 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.400779 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:28:44.408653 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:28:44.408723 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:28:44.416474 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:28:44.452812 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:28:44.452877 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:28:44.531198 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:28:44.531307 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:28:44.531355 1670843 kubeadm.go:319] OS: Linux
	I1222 01:28:44.531413 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:28:44.531466 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:28:44.531524 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:28:44.531590 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:28:44.531650 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:28:44.531733 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:28:44.531797 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:28:44.531864 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:28:44.531927 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:28:44.604053 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:28:44.604174 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:28:44.604270 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:28:44.614524 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:28:44.617569 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:28:44.617679 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:28:44.617781 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:28:44.891888 1670843 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:28:45.152805 1670843 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:28:45.274684 1670843 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:28:45.517117 1670843 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:28:45.648291 1670843 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:28:45.648639 1670843 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:45.933782 1670843 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:28:45.933931 1670843 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:46.072331 1670843 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:28:46.408818 1670843 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:28:46.502126 1670843 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:28:46.502542 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:28:46.995871 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:28:47.191545 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:28:47.264763 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:28:47.533721 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:28:47.788353 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:28:47.789301 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:28:47.796939 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:28:47.800747 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:28:47.800866 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:28:47.807894 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:28:47.807975 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:28:47.826994 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:28:47.827460 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:28:47.835465 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:28:47.835861 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:28:47.836121 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:28:47.973178 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:28:47.973389 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:30:31.528083 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001198283s
	I1222 01:30:31.528113 1661698 kubeadm.go:319] 
	I1222 01:30:31.528168 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:30:31.528204 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:30:31.528304 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:30:31.528309 1661698 kubeadm.go:319] 
	I1222 01:30:31.528414 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:30:31.528445 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:30:31.528475 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:30:31.528479 1661698 kubeadm.go:319] 
	I1222 01:30:31.534468 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:30:31.534939 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:30:31.535063 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:30:31.535323 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:30:31.535332 1661698 kubeadm.go:319] 
	I1222 01:30:31.535406 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:30:31.535527 1661698 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:30:31.535610 1661698 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:30:31.944583 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:30:31.959295 1661698 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:30:31.959366 1661698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:30:31.967786 1661698 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:30:31.967809 1661698 kubeadm.go:158] found existing configuration files:
	
	I1222 01:30:31.967875 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:30:31.976572 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:30:31.976645 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:30:31.984566 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:30:31.995193 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:30:31.995287 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:30:32.006676 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.018592 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:30:32.018733 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.028237 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:30:32.043043 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:30:32.043172 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:30:32.052235 1661698 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:30:32.094134 1661698 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:30:32.094474 1661698 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:30:32.174573 1661698 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:30:32.174734 1661698 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:30:32.174816 1661698 kubeadm.go:319] OS: Linux
	I1222 01:30:32.174901 1661698 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:30:32.174991 1661698 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:30:32.175073 1661698 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:30:32.175183 1661698 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:30:32.175273 1661698 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:30:32.175356 1661698 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:30:32.175408 1661698 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:30:32.175461 1661698 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:30:32.175510 1661698 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:30:32.244754 1661698 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:30:32.244977 1661698 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:30:32.245121 1661698 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:30:32.250598 1661698 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:30:32.253515 1661698 out.go:252]   - Generating certificates and keys ...
	I1222 01:30:32.253625 1661698 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:30:32.253721 1661698 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:30:32.253819 1661698 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:30:32.253899 1661698 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:30:32.253989 1661698 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:30:32.254107 1661698 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:30:32.254400 1661698 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:30:32.254506 1661698 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:30:32.254956 1661698 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:30:32.255237 1661698 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:30:32.255491 1661698 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:30:32.255552 1661698 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:30:32.402631 1661698 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:30:32.599258 1661698 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:30:33.036089 1661698 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:30:33.328680 1661698 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:30:33.401037 1661698 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:30:33.401569 1661698 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:30:33.404184 1661698 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:30:33.407507 1661698 out.go:252]   - Booting up control plane ...
	I1222 01:30:33.407615 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:30:33.407700 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:30:33.407772 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:30:33.430782 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:30:33.431319 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:30:33.439215 1661698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:30:33.439558 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:30:33.439606 1661698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:30:33.604011 1661698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:30:33.604133 1661698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:32:47.973116 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000283341s
	I1222 01:32:47.973157 1670843 kubeadm.go:319] 
	I1222 01:32:47.973324 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:32:47.973386 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:32:47.973953 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:32:47.974135 1670843 kubeadm.go:319] 
	I1222 01:32:47.974333 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:32:47.974391 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:32:47.974447 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:32:47.974452 1670843 kubeadm.go:319] 
	I1222 01:32:47.979596 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:32:47.980069 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:32:47.980186 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:32:47.980594 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:32:47.980620 1670843 kubeadm.go:319] 
	I1222 01:32:47.980717 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:32:47.980850 1670843 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000283341s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:32:47.980937 1670843 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:32:48.391513 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:32:48.405347 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:32:48.405424 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:32:48.413621 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:32:48.413642 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:32:48.413694 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:32:48.421650 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:32:48.421714 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:32:48.429403 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:32:48.437071 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:32:48.437146 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:32:48.444785 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.452627 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:32:48.452694 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.460251 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:32:48.468521 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:32:48.468599 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:32:48.476496 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:32:48.517508 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:32:48.517575 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:32:48.594935 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:32:48.595008 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:32:48.595046 1670843 kubeadm.go:319] OS: Linux
	I1222 01:32:48.595095 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:32:48.595143 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:32:48.595192 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:32:48.595240 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:32:48.595288 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:32:48.595340 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:32:48.595387 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:32:48.595435 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:32:48.595482 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:32:48.660826 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:32:48.660943 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:32:48.661079 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:32:48.666682 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:32:48.672026 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:32:48.672133 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:32:48.672215 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:32:48.672316 1670843 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:32:48.672398 1670843 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:32:48.672480 1670843 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:32:48.672546 1670843 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:32:48.672621 1670843 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:32:48.672694 1670843 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:32:48.672781 1670843 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:32:48.672898 1670843 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:32:48.672968 1670843 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:32:48.673051 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:32:48.931141 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:32:49.321960 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:32:49.787743 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:32:49.993441 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:32:50.084543 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:32:50.085011 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:32:50.087783 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:32:50.091167 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:32:50.091277 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:32:50.091357 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:32:50.091431 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:32:50.113375 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:32:50.113487 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:32:50.122587 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:32:50.123919 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:32:50.124082 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:32:50.256676 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:32:50.256808 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:34:33.605118 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202243s
	I1222 01:34:33.610423 1661698 kubeadm.go:319] 
	I1222 01:34:33.610510 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:34:33.610555 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:34:33.610673 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:34:33.610685 1661698 kubeadm.go:319] 
	I1222 01:34:33.610798 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:34:33.610837 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:34:33.610875 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:34:33.610884 1661698 kubeadm.go:319] 
	I1222 01:34:33.611729 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:34:33.612160 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:34:33.612286 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:34:33.612615 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:34:33.612639 1661698 kubeadm.go:319] 
	I1222 01:34:33.612738 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:34:33.612826 1661698 kubeadm.go:403] duration metric: took 8m6.521308561s to StartCluster
	I1222 01:34:33.612869 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:34:33.612963 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:34:33.638994 1661698 cri.go:96] found id: ""
	I1222 01:34:33.639065 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.639100 1661698 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:34:33.639124 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:34:33.639214 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:34:33.664403 1661698 cri.go:96] found id: ""
	I1222 01:34:33.664427 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.664436 1661698 logs.go:284] No container was found matching "etcd"
	I1222 01:34:33.664446 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:34:33.664509 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:34:33.695712 1661698 cri.go:96] found id: ""
	I1222 01:34:33.695738 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.695748 1661698 logs.go:284] No container was found matching "coredns"
	I1222 01:34:33.695754 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:34:33.695824 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:34:33.725832 1661698 cri.go:96] found id: ""
	I1222 01:34:33.725860 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.725869 1661698 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:34:33.725877 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:34:33.725946 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:34:33.754495 1661698 cri.go:96] found id: ""
	I1222 01:34:33.754525 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.754545 1661698 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:34:33.754568 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:34:33.754673 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:34:33.781930 1661698 cri.go:96] found id: ""
	I1222 01:34:33.781958 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.781967 1661698 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:34:33.781974 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:34:33.782035 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:34:33.809335 1661698 cri.go:96] found id: ""
	I1222 01:34:33.809412 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.809435 1661698 logs.go:284] No container was found matching "kindnet"
	I1222 01:34:33.809461 1661698 logs.go:123] Gathering logs for kubelet ...
	I1222 01:34:33.809500 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:34:33.870551 1661698 logs.go:123] Gathering logs for dmesg ...
	I1222 01:34:33.870590 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:34:33.887403 1661698 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:34:33.887432 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:34:33.956483 1661698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:34:33.956566 1661698 logs.go:123] Gathering logs for containerd ...
	I1222 01:34:33.956596 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:34:34.001866 1661698 logs.go:123] Gathering logs for container status ...
	I1222 01:34:34.001912 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:34:34.042014 1661698 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:34:34.042151 1661698 out.go:285] * 
	W1222 01:34:34.042248 1661698 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.042299 1661698 out.go:285] * 
	W1222 01:34:34.044850 1661698 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:34:34.050486 1661698 out.go:203] 
	W1222 01:34:34.053486 1661698 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.053539 1661698 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:34:34.053562 1661698 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:34:34.056784 1661698 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:26:14 no-preload-154186 containerd[756]: time="2025-12-22T01:26:14.470410254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.787207960Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.789433827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.803823042Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.804652777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.331340037Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.334400636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.353661044Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.357804008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.359874060Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.362802572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.384615077Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.388118298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.047922253Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.050246394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.057661077Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.058387369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.695409612Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.698274402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.719228692Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.720159212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.239290626Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.241741545Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.251782100Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.252093930Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:38.176852    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:38.177394    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:38.179212    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:38.179766    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:38.181652    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:34:38 up 1 day,  8:17,  0 user,  load average: 0.65, 1.24, 1.93
	Linux no-preload-154186 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:34:34 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:35 no-preload-154186 kubelet[5536]: E1222 01:34:35.080688    5536 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 01:34:35 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:35 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:35 no-preload-154186 kubelet[5580]: E1222 01:34:35.886384    5580 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:35 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:36 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 22 01:34:36 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:36 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:36 no-preload-154186 kubelet[5686]: E1222 01:34:36.789929    5686 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:36 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:36 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:37 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 22 01:34:37 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:37 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:37 no-preload-154186 kubelet[5725]: E1222 01:34:37.489992    5725 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:34:37 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:34:37 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:34:38 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 22 01:34:38 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:34:38 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 6 (326.385311ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:34:38.636788 1679029 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.01s)
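Editor's note: the kubelet crash loop in the journal above fails the same validation on every restart ("kubelet is configured to not run on a host using cgroup v1"), which points at the CI host still running cgroup v1 while kubelet v1.35 rejects it unless `FailCgroupV1` is explicitly set to `false` in the KubeletConfiguration, as the kubeadm warning states. A minimal host check, assuming a Linux machine with GNU `stat` available (not part of the test run itself), is to look at the filesystem type mounted at /sys/fs/cgroup:

```shell
# cgroup v2 mounts cgroup2fs at /sys/fs/cgroup; a cgroup v1 host
# typically has a tmpfs there with per-controller subdirectories.
fs_type=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)
case "$fs_type" in
  cgroup2fs) echo "cgroup v2" ;;
  tmpfs)     echo "cgroup v1" ;;
  *)         echo "undetermined ($fs_type)" ;;
esac
```

On this runner (kernel 5.15.0-1084-aws, per the kernel section above) the check would be expected to report cgroup v1, consistent with the validation error.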

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (103.01s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1222 01:34:41.428630 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:34:54.449474 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:35:39.807293 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:35:56.758232 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m41.502101811s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-154186 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-154186 describe deploy/metrics-server -n kube-system: exit status 1 (54.400203ms)

** stderr ** 
	error: context "no-preload-154186" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-154186 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
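Editor's note: the addon and kubectl failures in this test are downstream symptoms of the same root cause; every call dies with "dial tcp [::1]:8443: connect: connection refused" because the apiserver never came up. A hedged sketch of the equivalent manual probe (host and port taken from this log; not a command the test itself runs) is:

```shell
# Probe the apiserver endpoint the failing kubectl calls target.
# With the control plane down this should fail fast (refused), not hang.
if curl -sk --max-time 2 https://localhost:8443/healthz; then
  echo "apiserver responding"
else
  echo "apiserver unreachable"
fi
```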
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-154186
helpers_test.go:244: (dbg) docker inspect no-preload-154186:

-- stdout --
	[
	    {
	        "Id": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	        "Created": "2025-12-22T01:26:03.981102368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1662042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:26:04.182435984Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hosts",
	        "LogPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7-json.log",
	        "Name": "/no-preload-154186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-154186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-154186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	                "LowerDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-154186",
	                "Source": "/var/lib/docker/volumes/no-preload-154186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-154186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-154186",
	                "name.minikube.sigs.k8s.io": "no-preload-154186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c84f7a2a870d36246b7a801b7bf7055532e2138e424e145ab2b2ac49b81f1d2",
	            "SandboxKey": "/var/run/docker/netns/4c84f7a2a870",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38685"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38686"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38689"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38687"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38688"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-154186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:f0:0c:13:3c:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16496b4bf71c897a797bd98394f7b6821dd2bd1d65c81e9f1efd0fcd1f14d1a5",
	                    "EndpointID": "60d289b6dced5e2b95a24119998812f02b77a1cbd32a594dea6fb7ca62aa8c31",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-154186",
	                        "66e4ffc32ac4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 6 (309.840516ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:36:20.525652 1680799 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-154186 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:24 UTC │ 22 Dec 25 01:25 UTC │
	│ image   │ old-k8s-version-433815 image list --format=json                                                                                                                                                                                                          │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p old-k8s-version-433815 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:28:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:28:29.517235 1670843 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:28:29.517360 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517375 1670843 out.go:374] Setting ErrFile to fd 2...
	I1222 01:28:29.517381 1670843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:28:29.517635 1670843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:28:29.518139 1670843 out.go:368] Setting JSON to false
	I1222 01:28:29.519021 1670843 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115862,"bootTime":1766251047,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:28:29.519085 1670843 start.go:143] virtualization:  
	I1222 01:28:29.523165 1670843 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:28:29.526534 1670843 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:28:29.526612 1670843 notify.go:221] Checking for updates...
	I1222 01:28:29.533896 1670843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:28:29.537080 1670843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:28:29.540168 1670843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:28:29.543253 1670843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:28:29.546250 1670843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:28:29.549849 1670843 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:29.549971 1670843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:28:29.575293 1670843 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:28:29.575440 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.641901 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.632088848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.642009 1670843 docker.go:319] overlay module found
	I1222 01:28:29.645181 1670843 out.go:179] * Using the docker driver based on user configuration
	I1222 01:28:29.648076 1670843 start.go:309] selected driver: docker
	I1222 01:28:29.648097 1670843 start.go:928] validating driver "docker" against <nil>
	I1222 01:28:29.648110 1670843 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:28:29.648868 1670843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:28:29.705361 1670843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:28:29.695866952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:28:29.705519 1670843 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1222 01:28:29.705566 1670843 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 01:28:29.705783 1670843 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:28:29.708430 1670843 out.go:179] * Using Docker driver with root privileges
	I1222 01:28:29.711256 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:29.711321 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:29.711336 1670843 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:28:29.711424 1670843 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:29.714525 1670843 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:28:29.717251 1670843 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:28:29.720207 1670843 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:28:29.723048 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:29.723095 1670843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:28:29.723110 1670843 cache.go:65] Caching tarball of preloaded images
	I1222 01:28:29.723137 1670843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:28:29.723197 1670843 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:28:29.723207 1670843 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:28:29.723316 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:29.723334 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json: {Name:mk7d2be4f8d5fd1ff0598339a0c1f4c8dc1289c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:29.752728 1670843 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:28:29.752750 1670843 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:28:29.752765 1670843 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:28:29.752806 1670843 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:28:29.752907 1670843 start.go:364] duration metric: took 85.72µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:28:29.752938 1670843 start.go:93] Provisioning new machine with config: &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:28:29.753004 1670843 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:28:29.756462 1670843 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:28:29.756690 1670843 start.go:159] libmachine.API.Create for "newest-cni-869293" (driver="docker")
	I1222 01:28:29.756724 1670843 client.go:173] LocalClient.Create starting
	I1222 01:28:29.756801 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:28:29.756843 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756861 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.756915 1670843 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:28:29.756931 1670843 main.go:144] libmachine: Decoding PEM data...
	I1222 01:28:29.756942 1670843 main.go:144] libmachine: Parsing certificate...
	I1222 01:28:29.757315 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:28:29.774941 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:28:29.775018 1670843 network_create.go:284] running [docker network inspect newest-cni-869293] to gather additional debugging logs...
	I1222 01:28:29.775034 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293
	W1222 01:28:29.790350 1670843 cli_runner.go:211] docker network inspect newest-cni-869293 returned with exit code 1
	I1222 01:28:29.790377 1670843 network_create.go:287] error running [docker network inspect newest-cni-869293]: docker network inspect newest-cni-869293: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-869293 not found
	I1222 01:28:29.790389 1670843 network_create.go:289] output of [docker network inspect newest-cni-869293]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-869293 not found
	
	** /stderr **
	I1222 01:28:29.790489 1670843 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:29.811617 1670843 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:28:29.811973 1670843 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:28:29.812349 1670843 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:28:29.812832 1670843 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3f50}
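The subnet scan above walks minikube's candidate private /24 ranges (third octet 49, then +9 per attempt) and takes the first one not already claimed by an existing bridge. A minimal sketch of that selection logic in shell, with the taken subnets hard-coded from what the log reports (the real code inspects `docker network` state instead):

```shell
#!/bin/sh
# Subnets already claimed by existing bridges (as reported in the log above).
taken="192.168.49.0/24 192.168.58.0/24 192.168.67.0/24"

# Walk candidate third octets 49, 58, 67, 76, ... and pick the first free /24.
octet=49
free=""
while [ "$octet" -le 255 ]; do
  candidate="192.168.${octet}.0/24"
  case " $taken " in
    *" $candidate "*) ;;                # taken: keep scanning
    *) free="$candidate"; break ;;      # first free subnet wins
  esac
  octet=$((octet + 9))
done
echo "$free"
```

With the three taken subnets shown in the log, this lands on 192.168.76.0/24, matching the network minikube goes on to create.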
	I1222 01:28:29.812860 1670843 network_create.go:124] attempt to create docker network newest-cni-869293 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:28:29.812925 1670843 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-869293 newest-cni-869293
	I1222 01:28:29.869740 1670843 network_create.go:108] docker network newest-cni-869293 192.168.76.0/24 created
	I1222 01:28:29.869775 1670843 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-869293" container
	I1222 01:28:29.869851 1670843 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:28:29.886155 1670843 cli_runner.go:164] Run: docker volume create newest-cni-869293 --label name.minikube.sigs.k8s.io=newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:28:29.904602 1670843 oci.go:103] Successfully created a docker volume newest-cni-869293
	I1222 01:28:29.904703 1670843 cli_runner.go:164] Run: docker run --rm --name newest-cni-869293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --entrypoint /usr/bin/test -v newest-cni-869293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:28:30.526342 1670843 oci.go:107] Successfully prepared a docker volume newest-cni-869293
	I1222 01:28:30.526403 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:30.526413 1670843 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:28:30.526485 1670843 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:28:35.482640 1670843 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-869293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956113773s)
	I1222 01:28:35.482672 1670843 kic.go:203] duration metric: took 4.956255379s to extract preloaded images to volume ...
	W1222 01:28:35.482805 1670843 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:28:35.482926 1670843 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:28:35.547405 1670843 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-869293 --name newest-cni-869293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-869293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-869293 --network newest-cni-869293 --ip 192.168.76.2 --volume newest-cni-869293:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:28:35.858405 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Running}}
	I1222 01:28:35.884175 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:35.909405 1670843 cli_runner.go:164] Run: docker exec newest-cni-869293 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:28:35.959724 1670843 oci.go:144] the created container "newest-cni-869293" has a running status.
	I1222 01:28:35.959757 1670843 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa...
	I1222 01:28:36.206517 1670843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:28:36.231277 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.257362 1670843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:28:36.257407 1670843 kic_runner.go:114] Args: [docker exec --privileged newest-cni-869293 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:28:36.342107 1670843 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:28:36.366444 1670843 machine.go:94] provisionDockerMachine start ...
	I1222 01:28:36.366556 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:36.385946 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:36.386403 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:36.386423 1670843 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:28:36.387088 1670843 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39762->127.0.0.1:38695: read: connection reset by peer
	I1222 01:28:39.522339 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.522366 1670843 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:28:39.522451 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.547399 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.547774 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.547786 1670843 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:28:39.694424 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:28:39.694503 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:39.714200 1670843 main.go:144] libmachine: Using SSH client type: native
	I1222 01:28:39.714526 1670843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38695 <nil> <nil>}
	I1222 01:28:39.714551 1670843 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
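The SSH command above keeps /etc/hosts consistent with the new hostname: if no line already mentions the name, it rewrites an existing `127.0.1.1` entry, or appends one. The same logic can be exercised against a scratch file (the file contents here are illustrative; GNU grep/sed `\s` is assumed, as in the original command):

```shell
#!/bin/sh
hosts=$(mktemp)                      # scratch stand-in for /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

name="newest-cni-869293"
if ! grep -q "\s$name" "$hosts"; then
  if grep -q '^127.0.1.1\s' "$hosts"; then
    # rewrite the existing 127.0.1.1 entry in place
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/" "$hosts"
  else
    # or append a fresh one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```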
	I1222 01:28:39.850447 1670843 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:28:39.850500 1670843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:28:39.850540 1670843 ubuntu.go:190] setting up certificates
	I1222 01:28:39.850552 1670843 provision.go:84] configureAuth start
	I1222 01:28:39.850620 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:39.867785 1670843 provision.go:143] copyHostCerts
	I1222 01:28:39.867858 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:28:39.867874 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:28:39.867957 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:28:39.868053 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:28:39.868064 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:28:39.868091 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:28:39.868150 1670843 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:28:39.868160 1670843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:28:39.868186 1670843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:28:39.868234 1670843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:28:40.422763 1670843 provision.go:177] copyRemoteCerts
	I1222 01:28:40.422844 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:28:40.422895 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.440513 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.538074 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:28:40.556414 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:28:40.575373 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:28:40.593558 1670843 provision.go:87] duration metric: took 742.986656ms to configureAuth
	I1222 01:28:40.593586 1670843 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:28:40.593787 1670843 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:28:40.593803 1670843 machine.go:97] duration metric: took 4.22733559s to provisionDockerMachine
	I1222 01:28:40.593812 1670843 client.go:176] duration metric: took 10.837081515s to LocalClient.Create
	I1222 01:28:40.593832 1670843 start.go:167] duration metric: took 10.837143899s to libmachine.API.Create "newest-cni-869293"
	I1222 01:28:40.593852 1670843 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:28:40.593867 1670843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:28:40.593917 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:28:40.593967 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.610836 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.706406 1670843 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:28:40.710036 1670843 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:28:40.710063 1670843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:28:40.710074 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:28:40.710147 1670843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:28:40.710227 1670843 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:28:40.710335 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:28:40.718359 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:40.738021 1670843 start.go:296] duration metric: took 144.14894ms for postStartSetup
	I1222 01:28:40.738502 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.756061 1670843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:28:40.756360 1670843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:28:40.756410 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.773896 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.867396 1670843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:28:40.872357 1670843 start.go:128] duration metric: took 11.119339666s to createHost
	I1222 01:28:40.872384 1670843 start.go:83] releasing machines lock for "newest-cni-869293", held for 11.11946774s
	I1222 01:28:40.872459 1670843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:28:40.890060 1670843 ssh_runner.go:195] Run: cat /version.json
	I1222 01:28:40.890150 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.890413 1670843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:28:40.890480 1670843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:28:40.915709 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:40.935746 1670843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38695 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:28:41.107089 1670843 ssh_runner.go:195] Run: systemctl --version
	I1222 01:28:41.113703 1670843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:28:41.118148 1670843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:28:41.118236 1670843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:28:41.147883 1670843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
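The `find` above side-lines any bridge or podman CNI configs by renaming them with a `.mk_disabled` suffix, so only minikube's chosen CNI (kindnet, here) stays active. The same rename pass on a scratch directory (file names are illustrative; `-not` is the GNU find spelling used in the log):

```shell
#!/bin/sh
d=$(mktemp -d)   # stand-in for /etc/cni/net.d
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist"

# Disable bridge/podman configs, leave everything else alone.
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```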
	I1222 01:28:41.147909 1670843 start.go:496] detecting cgroup driver to use...
	I1222 01:28:41.147968 1670843 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:28:41.148044 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:28:41.163654 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:28:41.177388 1670843 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:28:41.177464 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:28:41.195478 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:28:41.214648 1670843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:28:41.335255 1670843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:28:41.459740 1670843 docker.go:234] disabling docker service ...
	I1222 01:28:41.459818 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:28:41.481154 1670843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:28:41.495828 1670843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:28:41.616188 1670843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:28:41.741458 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:28:41.755996 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:28:41.769875 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:28:41.778912 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:28:41.787547 1670843 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:28:41.787619 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:28:41.796129 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.804747 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:28:41.813988 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:28:41.823382 1670843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:28:41.831512 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:28:41.840674 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:28:41.849663 1670843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
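The run of `sed` commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, forcing `SystemdCgroup = false` to match the cgroupfs driver detected on the host, and normalizing the runc runtime shim to v2. The transformations can be checked on a sample snippet (the file contents here are a minimal stand-in for the real config; the `sed` expressions are the ones from the log):

```shell
#!/bin/sh
cfg=$(mktemp)   # stand-in for /etc/containerd/config.toml
cat > "$cfg" <<'EOF'
sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
  runtime_type = "io.containerd.runc.v1"
EOF

# Same substitutions the log runs over the real config:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
cat "$cfg"
```

Note the `\1` backreference preserves the original indentation, so the edits stay valid wherever the keys sit in the TOML tree.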
	I1222 01:28:41.859033 1670843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:28:41.866669 1670843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:28:41.874404 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:41.996020 1670843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:28:42.147980 1670843 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:28:42.148078 1670843 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:28:42.153206 1670843 start.go:564] Will wait 60s for crictl version
	I1222 01:28:42.153310 1670843 ssh_runner.go:195] Run: which crictl
	I1222 01:28:42.158111 1670843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:28:42.188961 1670843 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:28:42.189058 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.212951 1670843 ssh_runner.go:195] Run: containerd --version
	I1222 01:28:42.242832 1670843 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:28:42.245962 1670843 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:28:42.263401 1670843 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:28:42.267705 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
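The one-liner above is minikube's replace-or-append idiom for /etc/hosts entries: filter out any existing `host.minikube.internal` line, append the fresh gateway mapping, and copy the result back over the file. On a scratch file (the stale 192.168.58.1 entry is an illustrative assumption; the new 192.168.76.1 matches the network created earlier):

```shell
#!/bin/bash
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.58.1\thost.minikube.internal\n' > "$hosts"

# Drop any stale mapping, then append the current gateway address.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.76.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```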
	I1222 01:28:42.281779 1670843 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:28:42.284630 1670843 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:28:42.284796 1670843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:28:42.284882 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.315646 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.315686 1670843 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:28:42.315760 1670843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:28:42.343479 1670843 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:28:42.343504 1670843 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:28:42.343513 1670843 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:28:42.343653 1670843 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:28:42.343729 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:28:42.368317 1670843 cni.go:84] Creating CNI manager for ""
	I1222 01:28:42.368388 1670843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:28:42.368427 1670843 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:28:42.368461 1670843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:28:42.368600 1670843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:28:42.368678 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:28:42.376805 1670843 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:28:42.376907 1670843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:28:42.384913 1670843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:28:42.398203 1670843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:28:42.412066 1670843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:28:42.425146 1670843 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:28:42.428711 1670843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:28:42.438586 1670843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:28:42.581997 1670843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:28:42.598481 1670843 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:28:42.598505 1670843 certs.go:195] generating shared ca certs ...
	I1222 01:28:42.598523 1670843 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:42.598712 1670843 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:28:42.598780 1670843 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:28:42.598795 1670843 certs.go:257] generating profile certs ...
	I1222 01:28:42.598868 1670843 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:28:42.598888 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt with IP's: []
	I1222 01:28:43.368024 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt ...
	I1222 01:28:43.368059 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.crt: {Name:mkfc3a338fdb42add5491ce4694522898b79b83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368262 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key ...
	I1222 01:28:43.368276 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key: {Name:mkea74dd50bc644b440bafb99fc54190912b7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.368378 1670843 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:28:43.368397 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:28:43.608821 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce ...
	I1222 01:28:43.608852 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce: {Name:mk0db6b3e8c9bf7aff940b44fd05b130d9d585d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609048 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce ...
	I1222 01:28:43.609064 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce: {Name:mk9a3f763ae7146332940a9b4d9169402652e2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.609162 1670843 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt
	I1222 01:28:43.609244 1670843 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key
	I1222 01:28:43.609298 1670843 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:28:43.609318 1670843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt with IP's: []
	I1222 01:28:43.826780 1670843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt ...
	I1222 01:28:43.826812 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt: {Name:mk2b8cedfe513097eb57f8b68379ebde37c90b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827666 1670843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key ...
	I1222 01:28:43.827685 1670843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key: {Name:mk5be442e41d6696d708120ad1b125b0231d124b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:28:43.827919 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:28:43.827970 1670843 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:28:43.827983 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:28:43.828011 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:28:43.828040 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:28:43.828064 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:28:43.828110 1670843 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:28:43.828781 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:28:43.849273 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:28:43.869748 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:28:43.888552 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:28:43.907953 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:28:43.926293 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:28:43.944116 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:28:43.961772 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:28:43.979659 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:28:44.001330 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:28:44.024142 1670843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:28:44.045144 1670843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:28:44.059777 1670843 ssh_runner.go:195] Run: openssl version
	I1222 01:28:44.066272 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.074265 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:28:44.081820 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085681 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.085753 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:28:44.127374 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:28:44.135158 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:28:44.143793 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.151667 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:28:44.159424 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163331 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.163415 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:28:44.204528 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:28:44.212004 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:28:44.219747 1670843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.227544 1670843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:28:44.235151 1670843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239441 1670843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.239512 1670843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:28:44.281186 1670843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.288873 1670843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:28:44.296839 1670843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:28:44.300500 1670843 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:28:44.300563 1670843 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:28:44.300658 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:28:44.300723 1670843 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:28:44.328761 1670843 cri.go:96] found id: ""
	I1222 01:28:44.328834 1670843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:28:44.336639 1670843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:28:44.344992 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:28:44.345060 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:28:44.353083 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:28:44.353102 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:28:44.353165 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:28:44.361047 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:28:44.361129 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:28:44.368921 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:28:44.377002 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:28:44.377072 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:28:44.384992 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.393081 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:28:44.393153 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:28:44.400779 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:28:44.408653 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:28:44.408723 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:28:44.416474 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:28:44.452812 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:28:44.452877 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:28:44.531198 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:28:44.531307 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:28:44.531355 1670843 kubeadm.go:319] OS: Linux
	I1222 01:28:44.531413 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:28:44.531466 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:28:44.531524 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:28:44.531590 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:28:44.531650 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:28:44.531733 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:28:44.531797 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:28:44.531864 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:28:44.531927 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:28:44.604053 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:28:44.604174 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:28:44.604270 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:28:44.614524 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:28:44.617569 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:28:44.617679 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:28:44.617781 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:28:44.891888 1670843 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:28:45.152805 1670843 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:28:45.274684 1670843 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:28:45.517117 1670843 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:28:45.648291 1670843 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:28:45.648639 1670843 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:45.933782 1670843 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:28:45.933931 1670843 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:28:46.072331 1670843 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:28:46.408818 1670843 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:28:46.502126 1670843 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:28:46.502542 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:28:46.995871 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:28:47.191545 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:28:47.264763 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:28:47.533721 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:28:47.788353 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:28:47.789301 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:28:47.796939 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:28:47.800747 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:28:47.800866 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:28:47.807894 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:28:47.807975 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:28:47.826994 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:28:47.827460 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:28:47.835465 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:28:47.835861 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:28:47.836121 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:28:47.973178 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:28:47.973389 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:30:31.528083 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001198283s
	I1222 01:30:31.528113 1661698 kubeadm.go:319] 
	I1222 01:30:31.528168 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:30:31.528204 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:30:31.528304 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:30:31.528309 1661698 kubeadm.go:319] 
	I1222 01:30:31.528414 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:30:31.528445 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:30:31.528475 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:30:31.528479 1661698 kubeadm.go:319] 
	I1222 01:30:31.534468 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:30:31.534939 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:30:31.535063 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:30:31.535323 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:30:31.535332 1661698 kubeadm.go:319] 
	I1222 01:30:31.535406 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:30:31.535527 1661698 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-154186] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
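	[editor's note] The `[WARNING SystemVerification]` above points at the kubelet's cgroup v1 gate: kubelet v1.31+ exposes a `failCgroupV1` field in KubeletConfiguration, and per the warning it must be set to `false` for kubelet v1.35+ to keep running on a cgroup v1 host (this CI node runs kernel 5.15 with cgroupfs/v1). A minimal sketch of that fragment, assuming one intentionally stays on cgroup v1; the remaining fields are as rendered earlier in this log:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Explicitly opt in to deprecated cgroup v1 support (field added in kubelet v1.31).
# Left at its v1.35 default (true), the kubelet refuses to start on a cgroup v1 node,
# which matches the "kubelet is not healthy after 4m0s" failure recorded above.
failCgroupV1: false
```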
	I1222 01:30:31.535610 1661698 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:30:31.944583 1661698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:30:31.959295 1661698 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:30:31.959366 1661698 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:30:31.967786 1661698 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:30:31.967809 1661698 kubeadm.go:158] found existing configuration files:
	
	I1222 01:30:31.967875 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:30:31.976572 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:30:31.976645 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:30:31.984566 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:30:31.995193 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:30:31.995287 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:30:32.006676 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.018592 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:30:32.018733 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:30:32.028237 1661698 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:30:32.043043 1661698 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:30:32.043172 1661698 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:30:32.052235 1661698 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:30:32.094134 1661698 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:30:32.094474 1661698 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:30:32.174573 1661698 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:30:32.174734 1661698 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:30:32.174816 1661698 kubeadm.go:319] OS: Linux
	I1222 01:30:32.174901 1661698 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:30:32.174991 1661698 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:30:32.175073 1661698 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:30:32.175183 1661698 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:30:32.175273 1661698 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:30:32.175356 1661698 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:30:32.175408 1661698 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:30:32.175461 1661698 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:30:32.175510 1661698 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:30:32.244754 1661698 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:30:32.244977 1661698 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:30:32.245121 1661698 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:30:32.250598 1661698 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:30:32.253515 1661698 out.go:252]   - Generating certificates and keys ...
	I1222 01:30:32.253625 1661698 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:30:32.253721 1661698 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:30:32.253819 1661698 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:30:32.253899 1661698 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:30:32.253989 1661698 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:30:32.254107 1661698 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:30:32.254400 1661698 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:30:32.254506 1661698 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:30:32.254956 1661698 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:30:32.255237 1661698 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:30:32.255491 1661698 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:30:32.255552 1661698 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:30:32.402631 1661698 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:30:32.599258 1661698 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:30:33.036089 1661698 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:30:33.328680 1661698 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:30:33.401037 1661698 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:30:33.401569 1661698 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:30:33.404184 1661698 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:30:33.407507 1661698 out.go:252]   - Booting up control plane ...
	I1222 01:30:33.407615 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:30:33.407700 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:30:33.407772 1661698 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:30:33.430782 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:30:33.431319 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:30:33.439215 1661698 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:30:33.439558 1661698 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:30:33.439606 1661698 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:30:33.604011 1661698 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:30:33.604133 1661698 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:32:47.973116 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000283341s
	I1222 01:32:47.973157 1670843 kubeadm.go:319] 
	I1222 01:32:47.973324 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:32:47.973386 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:32:47.973953 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:32:47.974135 1670843 kubeadm.go:319] 
	I1222 01:32:47.974333 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:32:47.974391 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:32:47.974447 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:32:47.974452 1670843 kubeadm.go:319] 
	I1222 01:32:47.979596 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:32:47.980069 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:32:47.980186 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:32:47.980594 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 01:32:47.980620 1670843 kubeadm.go:319] 
	I1222 01:32:47.980717 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 01:32:47.980850 1670843 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-869293] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000283341s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1222 01:32:47.980937 1670843 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1222 01:32:48.391513 1670843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:32:48.405347 1670843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:32:48.405424 1670843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:32:48.413621 1670843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:32:48.413642 1670843 kubeadm.go:158] found existing configuration files:
	
	I1222 01:32:48.413694 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:32:48.421650 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:32:48.421714 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:32:48.429403 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:32:48.437071 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:32:48.437146 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:32:48.444785 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.452627 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:32:48.452694 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:32:48.460251 1670843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:32:48.468521 1670843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:32:48.468599 1670843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:32:48.476496 1670843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:32:48.517508 1670843 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 01:32:48.517575 1670843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:32:48.594935 1670843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:32:48.595008 1670843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:32:48.595046 1670843 kubeadm.go:319] OS: Linux
	I1222 01:32:48.595095 1670843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:32:48.595143 1670843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:32:48.595192 1670843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:32:48.595240 1670843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:32:48.595288 1670843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:32:48.595340 1670843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:32:48.595387 1670843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:32:48.595435 1670843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:32:48.595482 1670843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:32:48.660826 1670843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:32:48.660943 1670843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:32:48.661079 1670843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:32:48.666682 1670843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:32:48.672026 1670843 out.go:252]   - Generating certificates and keys ...
	I1222 01:32:48.672133 1670843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:32:48.672215 1670843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:32:48.672316 1670843 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 01:32:48.672398 1670843 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 01:32:48.672480 1670843 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 01:32:48.672546 1670843 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 01:32:48.672621 1670843 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 01:32:48.672694 1670843 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 01:32:48.672781 1670843 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 01:32:48.672898 1670843 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 01:32:48.672968 1670843 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 01:32:48.673051 1670843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:32:48.931141 1670843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:32:49.321960 1670843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:32:49.787743 1670843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:32:49.993441 1670843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:32:50.084543 1670843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:32:50.085011 1670843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:32:50.087783 1670843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:32:50.091167 1670843 out.go:252]   - Booting up control plane ...
	I1222 01:32:50.091277 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:32:50.091357 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:32:50.091431 1670843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:32:50.113375 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:32:50.113487 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:32:50.122587 1670843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:32:50.123919 1670843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:32:50.124082 1670843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:32:50.256676 1670843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:32:50.256808 1670843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:34:33.605118 1661698 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202243s
	I1222 01:34:33.610423 1661698 kubeadm.go:319] 
	I1222 01:34:33.610510 1661698 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:34:33.610555 1661698 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:34:33.610673 1661698 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:34:33.610685 1661698 kubeadm.go:319] 
	I1222 01:34:33.610798 1661698 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:34:33.610837 1661698 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:34:33.610875 1661698 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:34:33.610884 1661698 kubeadm.go:319] 
	I1222 01:34:33.611729 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:34:33.612160 1661698 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:34:33.612286 1661698 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:34:33.612615 1661698 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:34:33.612639 1661698 kubeadm.go:319] 
	I1222 01:34:33.612738 1661698 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:34:33.612826 1661698 kubeadm.go:403] duration metric: took 8m6.521308561s to StartCluster
	I1222 01:34:33.612869 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:34:33.612963 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:34:33.638994 1661698 cri.go:96] found id: ""
	I1222 01:34:33.639065 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.639100 1661698 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:34:33.639124 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:34:33.639214 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:34:33.664403 1661698 cri.go:96] found id: ""
	I1222 01:34:33.664427 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.664436 1661698 logs.go:284] No container was found matching "etcd"
	I1222 01:34:33.664446 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:34:33.664509 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:34:33.695712 1661698 cri.go:96] found id: ""
	I1222 01:34:33.695738 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.695748 1661698 logs.go:284] No container was found matching "coredns"
	I1222 01:34:33.695754 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:34:33.695824 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:34:33.725832 1661698 cri.go:96] found id: ""
	I1222 01:34:33.725860 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.725869 1661698 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:34:33.725877 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:34:33.725946 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:34:33.754495 1661698 cri.go:96] found id: ""
	I1222 01:34:33.754525 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.754545 1661698 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:34:33.754568 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:34:33.754673 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:34:33.781930 1661698 cri.go:96] found id: ""
	I1222 01:34:33.781958 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.781967 1661698 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:34:33.781974 1661698 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:34:33.782035 1661698 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:34:33.809335 1661698 cri.go:96] found id: ""
	I1222 01:34:33.809412 1661698 logs.go:282] 0 containers: []
	W1222 01:34:33.809435 1661698 logs.go:284] No container was found matching "kindnet"
	I1222 01:34:33.809461 1661698 logs.go:123] Gathering logs for kubelet ...
	I1222 01:34:33.809500 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:34:33.870551 1661698 logs.go:123] Gathering logs for dmesg ...
	I1222 01:34:33.870590 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:34:33.887403 1661698 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:34:33.887432 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:34:33.956483 1661698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:34:33.946406    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.946913    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.948611    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.949424    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:34:33.951490    5431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:34:33.956566 1661698 logs.go:123] Gathering logs for containerd ...
	I1222 01:34:33.956596 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:34:34.001866 1661698 logs.go:123] Gathering logs for container status ...
	I1222 01:34:34.001912 1661698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:34:34.042014 1661698 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:34:34.042151 1661698 out.go:285] * 
	W1222 01:34:34.042248 1661698 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.042299 1661698 out.go:285] * 
	W1222 01:34:34.044850 1661698 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:34:34.050486 1661698 out.go:203] 
	W1222 01:34:34.053486 1661698 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:34:34.053539 1661698 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:34:34.053562 1661698 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:34:34.056784 1661698 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:26:14 no-preload-154186 containerd[756]: time="2025-12-22T01:26:14.470410254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.787207960Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.789433827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.803823042Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:15 no-preload-154186 containerd[756]: time="2025-12-22T01:26:15.804652777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.331340037Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.334400636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.353661044Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:17 no-preload-154186 containerd[756]: time="2025-12-22T01:26:17.357804008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.359874060Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.362802572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.384615077Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:19 no-preload-154186 containerd[756]: time="2025-12-22T01:26:19.388118298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.047922253Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.050246394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.057661077Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:21 no-preload-154186 containerd[756]: time="2025-12-22T01:26:21.058387369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.695409612Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.698274402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.719228692Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:22 no-preload-154186 containerd[756]: time="2025-12-22T01:26:22.720159212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.239290626Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.241741545Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.251782100Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 22 01:26:23 no-preload-154186 containerd[756]: time="2025-12-22T01:26:23.252093930Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:36:21.183609    6830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:21.184096    6830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:21.185612    6830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:21.187038    6830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:21.187376    6830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:36:21 up 1 day,  8:18,  0 user,  load average: 0.52, 1.01, 1.77
	Linux no-preload-154186 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:36:17 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:18 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 458.
	Dec 22 01:36:18 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:18 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:18 no-preload-154186 kubelet[6708]: E1222 01:36:18.545549    6708 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:18 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:18 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:19 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 22 01:36:19 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:19 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:19 no-preload-154186 kubelet[6713]: E1222 01:36:19.303353    6713 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:19 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:19 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:19 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 460.
	Dec 22 01:36:19 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:19 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:20 no-preload-154186 kubelet[6718]: E1222 01:36:20.049487    6718 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:20 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:20 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:36:20 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 22 01:36:20 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:20 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:36:20 no-preload-154186 kubelet[6744]: E1222 01:36:20.850789    6744 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:36:20 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:36:20 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 6 (344.975044ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:36:21.649214 1681018 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (103.01s)
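The kubelet restart loop above is caused by the cgroup v1 validation error ("kubelet is configured to not run on a host using cgroup v1"). A quick way to confirm which cgroup hierarchy the host (or the minikube node container) is actually running — a diagnostic sketch, not part of the test harness — is to check the filesystem type mounted at /sys/fs/cgroup:

```shell
# Report the cgroup hierarchy in use on this host:
# "cgroup2fs" means the unified cgroup v2 hierarchy;
# "tmpfs" indicates the legacy cgroup v1 layout.
stat -fc %T /sys/fs/cgroup/
```

On the Ubuntu 20.04 arm64 runners used here (kernel 5.15, cgroupfs driver per the docker info above), this would be expected to report the legacy v1 layout, which matches the kubelet's refusal to start.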

TestStartStop/group/no-preload/serial/SecondStart (370.75s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1222 01:36:29.153482 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 80 (6m8.077508672s)

-- stdout --
	* [no-preload-154186] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-154186" primary control-plane node in "no-preload-154186" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1222 01:36:23.234823 1681323 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:36:23.235027 1681323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:23.235049 1681323 out.go:374] Setting ErrFile to fd 2...
	I1222 01:36:23.235067 1681323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:23.235421 1681323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:36:23.235901 1681323 out.go:368] Setting JSON to false
	I1222 01:36:23.237129 1681323 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116336,"bootTime":1766251047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:36:23.237207 1681323 start.go:143] virtualization:  
	I1222 01:36:23.240218 1681323 out.go:179] * [no-preload-154186] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:36:23.244197 1681323 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:36:23.244262 1681323 notify.go:221] Checking for updates...
	I1222 01:36:23.247308 1681323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:36:23.251437 1681323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:23.254483 1681323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:36:23.257414 1681323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:36:23.260441 1681323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:36:23.264003 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:23.264837 1681323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:36:23.295171 1681323 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:36:23.295305 1681323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:23.350537 1681323 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:23.341149383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:23.350648 1681323 docker.go:319] overlay module found
	I1222 01:36:23.353847 1681323 out.go:179] * Using the docker driver based on existing profile
	I1222 01:36:23.356749 1681323 start.go:309] selected driver: docker
	I1222 01:36:23.356777 1681323 start.go:928] validating driver "docker" against &{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:23.356883 1681323 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:36:23.357613 1681323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:23.417124 1681323 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:23.408084515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:23.417456 1681323 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:36:23.417483 1681323 cni.go:84] Creating CNI manager for ""
	I1222 01:36:23.417540 1681323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:36:23.417589 1681323 start.go:353] cluster config:
	{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:23.420666 1681323 out.go:179] * Starting "no-preload-154186" primary control-plane node in "no-preload-154186" cluster
	I1222 01:36:23.423625 1681323 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:36:23.426661 1681323 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:36:23.429673 1681323 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:36:23.429835 1681323 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:36:23.430154 1681323 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:36:23.430238 1681323 cache.go:107] acquiring lock: {Name:mk3bde21e751b3aa3caf7a41c8a37e36cec6e7cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430340 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:36:23.430349 1681323 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.997µs
	I1222 01:36:23.430379 1681323 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:36:23.430401 1681323 cache.go:107] acquiring lock: {Name:mk4a15c8225bf94a78b514d4142ea41c6bb91faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430458 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:36:23.430472 1681323 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 72.633µs
	I1222 01:36:23.430491 1681323 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430523 1681323 cache.go:107] acquiring lock: {Name:mkeb24b7f997eb1a1a3d59e2a2d68597fffc7c36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430589 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:36:23.430602 1681323 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 94.27µs
	I1222 01:36:23.430610 1681323 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430636 1681323 cache.go:107] acquiring lock: {Name:mkf2939c17635a47347d3721871a718b69a7a19c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430687 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:36:23.430709 1681323 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 74.782µs
	I1222 01:36:23.430717 1681323 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430735 1681323 cache.go:107] acquiring lock: {Name:mk1daf2f1163a462fd1f82e12b9d4b157cffc772 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430785 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:36:23.430802 1681323 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 62.638µs
	I1222 01:36:23.430824 1681323 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430840 1681323 cache.go:107] acquiring lock: {Name:mk48171dacff6bbfb8016f0e5908022e81e1ea85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430924 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:36:23.430937 1681323 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 103.344µs
	I1222 01:36:23.430969 1681323 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:36:23.431003 1681323 cache.go:107] acquiring lock: {Name:mkc08548a3ab9782a3dcbbb4e211790535cb9d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.431057 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:36:23.431070 1681323 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 69.399µs
	I1222 01:36:23.431089 1681323 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:36:23.431107 1681323 cache.go:107] acquiring lock: {Name:mk2f653a9914a185aaa3299c67a548da6098dcf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.431143 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:36:23.431164 1681323 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.804µs
	I1222 01:36:23.431176 1681323 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:36:23.431183 1681323 cache.go:87] Successfully saved all images to host disk.
	I1222 01:36:23.450810 1681323 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:36:23.450833 1681323 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:36:23.450848 1681323 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:36:23.450878 1681323 start.go:360] acquireMachinesLock for no-preload-154186: {Name:mk9dee4f9b1c44d5e40729915965cd9e314df88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.450936 1681323 start.go:364] duration metric: took 37.506µs to acquireMachinesLock for "no-preload-154186"
	I1222 01:36:23.450961 1681323 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:36:23.450970 1681323 fix.go:54] fixHost starting: 
	I1222 01:36:23.451228 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:23.468570 1681323 fix.go:112] recreateIfNeeded on no-preload-154186: state=Stopped err=<nil>
	W1222 01:36:23.468607 1681323 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:36:23.472031 1681323 out.go:252] * Restarting existing docker container for "no-preload-154186" ...
	I1222 01:36:23.472128 1681323 cli_runner.go:164] Run: docker start no-preload-154186
	I1222 01:36:23.751686 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:23.775088 1681323 kic.go:430] container "no-preload-154186" state is running.
	I1222 01:36:23.775522 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:23.804788 1681323 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:36:23.805037 1681323 machine.go:94] provisionDockerMachine start ...
	I1222 01:36:23.805105 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:23.831796 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:23.832139 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:23.832149 1681323 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:36:23.834213 1681323 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:36:26.965689 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:36:26.965717 1681323 ubuntu.go:182] provisioning hostname "no-preload-154186"
	I1222 01:36:26.965785 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:26.985217 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:26.985542 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:26.985560 1681323 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-154186 && echo "no-preload-154186" | sudo tee /etc/hostname
	I1222 01:36:27.127502 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:36:27.127590 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.145587 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:27.145900 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:27.145916 1681323 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-154186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-154186/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-154186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:36:27.278718 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:36:27.278747 1681323 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:36:27.278768 1681323 ubuntu.go:190] setting up certificates
	I1222 01:36:27.278786 1681323 provision.go:84] configureAuth start
	I1222 01:36:27.278873 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:27.301231 1681323 provision.go:143] copyHostCerts
	I1222 01:36:27.301308 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:36:27.301328 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:36:27.301409 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:36:27.301556 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:36:27.301569 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:36:27.301598 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:36:27.301659 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:36:27.301669 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:36:27.301695 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:36:27.301746 1681323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.no-preload-154186 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-154186]
	I1222 01:36:27.754512 1681323 provision.go:177] copyRemoteCerts
	I1222 01:36:27.754594 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:36:27.754648 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.772550 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:27.874202 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:36:27.892571 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:36:27.911007 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:36:27.928834 1681323 provision.go:87] duration metric: took 650.003977ms to configureAuth
	I1222 01:36:27.928863 1681323 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:36:27.929086 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:27.929099 1681323 machine.go:97] duration metric: took 4.124054244s to provisionDockerMachine
	I1222 01:36:27.929107 1681323 start.go:293] postStartSetup for "no-preload-154186" (driver="docker")
	I1222 01:36:27.929119 1681323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:36:27.929165 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:36:27.929208 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.946963 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.042660 1681323 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:36:28.046171 1681323 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:36:28.046204 1681323 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:36:28.046222 1681323 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:36:28.046287 1681323 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:36:28.046377 1681323 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:36:28.046485 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:36:28.054291 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:36:28.073024 1681323 start.go:296] duration metric: took 143.901056ms for postStartSetup
	I1222 01:36:28.073108 1681323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:36:28.073167 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.091267 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.183597 1681323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:36:28.188658 1681323 fix.go:56] duration metric: took 4.737681885s for fixHost
	I1222 01:36:28.188687 1681323 start.go:83] releasing machines lock for "no-preload-154186", held for 4.737736532s
	I1222 01:36:28.188793 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:28.206039 1681323 ssh_runner.go:195] Run: cat /version.json
	I1222 01:36:28.206158 1681323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:36:28.206221 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.206378 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.224770 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.230258 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.414932 1681323 ssh_runner.go:195] Run: systemctl --version
	I1222 01:36:28.421366 1681323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:36:28.425653 1681323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:36:28.425721 1681323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:36:28.433525 1681323 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:36:28.433549 1681323 start.go:496] detecting cgroup driver to use...
	I1222 01:36:28.433582 1681323 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:36:28.433651 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:36:28.451333 1681323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:36:28.464888 1681323 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:36:28.464974 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:36:28.480732 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:36:28.494042 1681323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:36:28.611667 1681323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:36:28.731604 1681323 docker.go:234] disabling docker service ...
	I1222 01:36:28.731674 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:36:28.747773 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:36:28.761732 1681323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:36:28.883133 1681323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:36:29.013965 1681323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:36:29.029996 1681323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:36:29.046133 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:36:29.056270 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:36:29.066036 1681323 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:36:29.066163 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:36:29.075930 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:36:29.084710 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:36:29.093653 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:36:29.102647 1681323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:36:29.110826 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:36:29.119665 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:36:29.128698 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:36:29.137543 1681323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:36:29.145415 1681323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:36:29.153357 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:29.268778 1681323 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:36:29.366806 1681323 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:36:29.366878 1681323 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:36:29.370821 1681323 start.go:564] Will wait 60s for crictl version
	I1222 01:36:29.370889 1681323 ssh_runner.go:195] Run: which crictl
	I1222 01:36:29.374398 1681323 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:36:29.401622 1681323 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:36:29.401693 1681323 ssh_runner.go:195] Run: containerd --version
	I1222 01:36:29.425502 1681323 ssh_runner.go:195] Run: containerd --version
	I1222 01:36:29.452207 1681323 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:36:29.455184 1681323 cli_runner.go:164] Run: docker network inspect no-preload-154186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:36:29.471412 1681323 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:36:29.475195 1681323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:29.484943 1681323 kubeadm.go:884] updating cluster {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:36:29.485070 1681323 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:36:29.485129 1681323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:36:29.515771 1681323 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:36:29.515798 1681323 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:36:29.515812 1681323 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:36:29.515907 1681323 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-154186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:36:29.515977 1681323 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:36:29.544359 1681323 cni.go:84] Creating CNI manager for ""
	I1222 01:36:29.544384 1681323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:36:29.544401 1681323 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:36:29.544424 1681323 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-154186 NodeName:no-preload-154186 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:36:29.544539 1681323 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-154186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:36:29.544615 1681323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:36:29.552325 1681323 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:36:29.552411 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:36:29.560003 1681323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:36:29.572789 1681323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:36:29.585517 1681323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 01:36:29.599349 1681323 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:36:29.603106 1681323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:29.612969 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:29.733862 1681323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:29.752522 1681323 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186 for IP: 192.168.85.2
	I1222 01:36:29.752545 1681323 certs.go:195] generating shared ca certs ...
	I1222 01:36:29.752562 1681323 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:29.752701 1681323 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:36:29.752747 1681323 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:36:29.752758 1681323 certs.go:257] generating profile certs ...
	I1222 01:36:29.752867 1681323 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.key
	I1222 01:36:29.752925 1681323 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key.e54c24a5
	I1222 01:36:29.752976 1681323 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key
	I1222 01:36:29.753099 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:36:29.753135 1681323 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:36:29.753147 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:36:29.753174 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:36:29.753203 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:36:29.753232 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:36:29.753285 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:36:29.753910 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:36:29.782071 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:36:29.803383 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:36:29.824019 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:36:29.845035 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:36:29.866115 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:36:29.883918 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:36:29.900943 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:36:29.918714 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:36:29.936559 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:36:29.954160 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:36:29.972189 1681323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:36:29.985434 1681323 ssh_runner.go:195] Run: openssl version
	I1222 01:36:29.992444 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.000140 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:36:30.014964 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.043109 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.043223 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.108760 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:36:30.118305 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.127792 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:36:30.136802 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.141548 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.141643 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.184623 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:36:30.193382 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.201724 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:36:30.210242 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.214881 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.214969 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.256748 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:36:30.264842 1681323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:36:30.268912 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:36:30.310683 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:36:30.352386 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:36:30.393519 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:36:30.434377 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:36:30.475355 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:36:30.540692 1681323 kubeadm.go:401] StartCluster: {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:30.540782 1681323 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:36:30.540866 1681323 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:36:30.575229 1681323 cri.go:96] found id: ""
	I1222 01:36:30.575312 1681323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:36:30.584220 1681323 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:36:30.584293 1681323 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:36:30.584391 1681323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:36:30.594816 1681323 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:36:30.595221 1681323 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:30.595322 1681323 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-154186" cluster setting kubeconfig missing "no-preload-154186" context setting]
	I1222 01:36:30.595620 1681323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.596925 1681323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:36:30.604842 1681323 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:36:30.604873 1681323 kubeadm.go:602] duration metric: took 20.560605ms to restartPrimaryControlPlane
	I1222 01:36:30.604883 1681323 kubeadm.go:403] duration metric: took 64.203267ms to StartCluster
	I1222 01:36:30.604898 1681323 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.604963 1681323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:30.605576 1681323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.605779 1681323 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:36:30.606072 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:30.606145 1681323 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:36:30.606208 1681323 addons.go:70] Setting storage-provisioner=true in profile "no-preload-154186"
	I1222 01:36:30.606221 1681323 addons.go:239] Setting addon storage-provisioner=true in "no-preload-154186"
	I1222 01:36:30.606247 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.606455 1681323 addons.go:70] Setting dashboard=true in profile "no-preload-154186"
	I1222 01:36:30.606480 1681323 addons.go:239] Setting addon dashboard=true in "no-preload-154186"
	W1222 01:36:30.606487 1681323 addons.go:248] addon dashboard should already be in state true
	I1222 01:36:30.606508 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.606709 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.606923 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.609388 1681323 addons.go:70] Setting default-storageclass=true in profile "no-preload-154186"
	I1222 01:36:30.609534 1681323 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-154186"
	I1222 01:36:30.610760 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.611641 1681323 out.go:179] * Verifying Kubernetes components...
	I1222 01:36:30.614570 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:30.635770 1681323 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:36:30.638688 1681323 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:30.638712 1681323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:36:30.638781 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.665572 1681323 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:36:30.669104 1681323 addons.go:239] Setting addon default-storageclass=true in "no-preload-154186"
	I1222 01:36:30.669154 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.669590 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.683959 1681323 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:36:30.687520 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:36:30.687549 1681323 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:36:30.687626 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.694403 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.702255 1681323 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:30.702278 1681323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:36:30.702352 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.734213 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.746998 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.831368 1681323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:30.859591 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:30.874776 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:36:30.874854 1681323 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:36:30.886571 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:30.896408 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:36:30.896491 1681323 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:36:30.935406 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:36:30.935480 1681323 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:36:30.980952 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:36:30.980974 1681323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:36:30.995662 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:36:30.995686 1681323 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:36:31.011181 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:36:31.011207 1681323 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:36:31.025817 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:36:31.025897 1681323 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:36:31.040425 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:36:31.040451 1681323 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:36:31.053847 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:31.053877 1681323 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:36:31.068203 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:31.247341 1681323 node_ready.go:35] waiting up to 6m0s for node "no-preload-154186" to be "Ready" ...
	W1222 01:36:31.247593 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.247631 1681323 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.247506 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.247875 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.447386 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:31.513087 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.526286 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:31.572817 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:31.587392 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.642567 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.848456 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:31.905960 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.073298 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:32.132496 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.203849 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:32.270984 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.342424 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:32.407926 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.631301 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:32.690048 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.962216 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:33.025131 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:33.248294 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:33.408735 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:33.475146 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:33.554384 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:33.564336 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:33.639233 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:33.648250 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:34.651164 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:34.715118 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:34.728333 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:34.766376 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:34.808648 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:34.845664 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:35.248568 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:36.090694 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:36.156793 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:36.271773 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:36.333746 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:36.520979 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:36.615878 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:37.748720 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:38.179203 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:38.240963 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:38.510984 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:38.571044 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:39.749000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:40.373298 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:40.435360 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:40.473700 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:40.535044 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:41.925479 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:41.983644 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:42.248915 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:36:44.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:45.973776 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:46.041089 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:46.041130 1681323 retry.go:84] will retry after 8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:46.497259 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:46.561444 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:46.749030 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:47.516501 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:47.578863 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:49.248854 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:36:51.748544 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:51.887416 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:51.983037 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:51.983081 1681323 retry.go:84] will retry after 11.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:53.748772 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:54.034268 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:54.095252 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:56.248219 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:36:58.248513 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:59.627564 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:59.686822 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:59.686859 1681323 retry.go:84] will retry after 13.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:00.248760 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:02.748124 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:03.607385 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:37:03.671489 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:03.671533 1681323 retry.go:84] will retry after 9.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:04.749026 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:07.248861 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:09.248980 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:09.255139 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:37:09.316591 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:09.316628 1681323 retry.go:84] will retry after 18s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:11.749013 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:12.971520 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:37:13.036863 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:13.036909 1681323 retry.go:84] will retry after 32.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:13.196332 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:37:13.286762 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:14.248137 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:16.248491 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:18.748811 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:21.248255 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:23.748025 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:26.248038 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:27.347403 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:37:27.407561 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:27.407597 1681323 retry.go:84] will retry after 34.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:28.248450 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:30.748265 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:33.248411 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:35.748090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:37.748168 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:39.748842 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:42.249134 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:43.405621 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:37:43.463716 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:43.463758 1681323 retry.go:84] will retry after 31.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:44.748410 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:45.174172 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:37:45.310864 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:47.248085 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:49.248654 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:51.249044 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:53.748668 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:56.248083 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:58.248588 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:00.748407 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:02.126827 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:02.186335 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:02.186432 1681323 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:38:02.748918 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:05.248068 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:07.248197 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:09.249028 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:11.748079 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:13.748193 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:14.875348 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:14.937026 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:14.937135 1681323 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:38:14.956170 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:15.025663 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:15.025778 1681323 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:38:15.029889 1681323 out.go:179] * Enabled addons: 
	I1222 01:38:15.035089 1681323 addons.go:530] duration metric: took 1m44.428931378s for enable addons: enabled=[]
	W1222 01:38:16.248009 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:18.248678 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:20.748366 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:23.248345 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:25.248576 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:27.249028 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:29.750532 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:32.248109 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:34.249047 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:36.748953 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:39.248045 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:41.248244 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:43.248311 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:45.249134 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:47.748896 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:49.748932 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:52.248838 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:54.748014 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:56.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:59.248212 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:01.248492 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:03.747982 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:05.748583 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:07.748998 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:10.248934 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:12.748076 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:14.748959 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:17.248674 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:19.748622 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:21.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:24.248544 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:26.747990 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:28.748186 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:30.748280 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:32.748600 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:35.249008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:37.748582 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:40.248054 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:42.748083 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:44.748908 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:47.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:49.249046 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:51.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:53.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:56.248725 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:58.748290 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:00.756406 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:03.248649 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:05.249130 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:07.749016 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:10.247989 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:12.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:14.747949 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:16.749012 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:19.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:21.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:23.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:25.748112 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:27.748979 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:30.247997 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:32.248097 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:34.248867 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:36.748755 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:38.748943 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:41.248791 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:43.748820 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:45.748974 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:48.248802 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:50.748310 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:53.248434 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:55.748007 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:57.748191 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:00.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:02.248653 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:04.248851 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:06.748841 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:09.248676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:11.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:14.248149 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:16.748036 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:18.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:21.248234 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:23.248384 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:25.748529 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:27.748967 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:30.247999 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:32.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:34.248875 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:36.748177 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:39.248106 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:41.748067 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:44.248599 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:46.748094 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:48.749155 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:51.248977 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:53.748987 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:56.248859 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:58.748026 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:00.748292 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:03.248327 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:05.748676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:08.248287 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:10.748100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:12.748911 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:14.749008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:17.249086 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:19.748284 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:22.248100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:24.248221 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:26.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:29.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:31.247763 1681323 node_ready.go:38] duration metric: took 6m0.000217195s for node "no-preload-154186" to be "Ready" ...
	I1222 01:42:31.251066 1681323 out.go:203] 
	W1222 01:42:31.253946 1681323 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:42:31.253969 1681323 out.go:285] * 
	W1222 01:42:31.256107 1681323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:42:31.259342 1681323 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-154186
helpers_test.go:244: (dbg) docker inspect no-preload-154186:

-- stdout --
	[
	    {
	        "Id": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	        "Created": "2025-12-22T01:26:03.981102368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1681449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:36:23.503691575Z",
	            "FinishedAt": "2025-12-22T01:36:22.112804811Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hosts",
	        "LogPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7-json.log",
	        "Name": "/no-preload-154186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-154186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-154186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	                "LowerDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-154186",
	                "Source": "/var/lib/docker/volumes/no-preload-154186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-154186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-154186",
	                "name.minikube.sigs.k8s.io": "no-preload-154186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79de2efae0fd51e067446e17772315f189c10d5767e33af4ebd104752f65737c",
	            "SandboxKey": "/var/run/docker/netns/79de2efae0fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38702"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38703"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38706"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38704"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38705"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-154186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c4:4d:4a:c4:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16496b4bf71c897a797bd98394f7b6821dd2bd1d65c81e9f1efd0fcd1f14d1a5",
	                    "EndpointID": "47af66f9da650982ed99a47d4f083adda357be5441350f59f6280b70b837f98e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-154186",
	                        "66e4ffc32ac4"
	                    ]
	                }
	            }
	        }
	    }
]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 2 (416.599562ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-154186 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-154186 logs -n 25: (1.100938237s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p no-preload-154186 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p no-preload-154186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ stop    │ -p newest-cni-869293 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-869293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:31.686572 1685746 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:31.686782 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.686816 1685746 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:31.686836 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.687133 1685746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:38:31.687563 1685746 out.go:368] Setting JSON to false
	I1222 01:38:31.688584 1685746 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116465,"bootTime":1766251047,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:38:31.688686 1685746 start.go:143] virtualization:  
	I1222 01:38:31.691576 1685746 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:31.695464 1685746 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:31.695552 1685746 notify.go:221] Checking for updates...
	I1222 01:38:31.701535 1685746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:31.704637 1685746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:31.707560 1685746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:38:31.710534 1685746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:31.713575 1685746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:31.717166 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:31.717762 1685746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:31.753414 1685746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:31.753539 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.812499 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.803096079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.812613 1685746 docker.go:319] overlay module found
	I1222 01:38:31.815770 1685746 out.go:179] * Using the docker driver based on existing profile
	I1222 01:38:31.818545 1685746 start.go:309] selected driver: docker
	I1222 01:38:31.818566 1685746 start.go:928] validating driver "docker" against &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.818662 1685746 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:31.819384 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.880587 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.870819289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.880955 1685746 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:31.880984 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:31.881038 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:31.881081 1685746 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.884279 1685746 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:38:31.887056 1685746 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:38:31.890043 1685746 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:31.892868 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:31.892919 1685746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:38:31.892932 1685746 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:31.892952 1685746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:31.893022 1685746 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:31.893039 1685746 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:38:31.893153 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:31.913018 1685746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:31.913041 1685746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:31.913060 1685746 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:31.913090 1685746 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:31.913180 1685746 start.go:364] duration metric: took 44.275µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:38:31.913204 1685746 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:38:31.913210 1685746 fix.go:54] fixHost starting: 
	I1222 01:38:31.913477 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:31.930780 1685746 fix.go:112] recreateIfNeeded on newest-cni-869293: state=Stopped err=<nil>
	W1222 01:38:31.930815 1685746 fix.go:138] unexpected machine state, will restart: <nil>
	W1222 01:38:29.750532 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:32.248109 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:31.934050 1685746 out.go:252] * Restarting existing docker container for "newest-cni-869293" ...
	I1222 01:38:31.934152 1685746 cli_runner.go:164] Run: docker start newest-cni-869293
	I1222 01:38:32.204881 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:32.243691 1685746 kic.go:430] container "newest-cni-869293" state is running.
	I1222 01:38:32.244096 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:32.265947 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:32.266210 1685746 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:32.266268 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:32.293919 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:32.294281 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:32.294292 1685746 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:32.294932 1685746 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54476->127.0.0.1:38707: read: connection reset by peer
	I1222 01:38:35.433786 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.433813 1685746 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:38:35.433886 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.451516 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.451830 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.451848 1685746 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:38:35.591409 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.591519 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.609341 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.609647 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.609670 1685746 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:35.742798 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:35.742824 1685746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:38:35.742864 1685746 ubuntu.go:190] setting up certificates
	I1222 01:38:35.742881 1685746 provision.go:84] configureAuth start
	I1222 01:38:35.742942 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:35.763152 1685746 provision.go:143] copyHostCerts
	I1222 01:38:35.763214 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:38:35.763230 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:38:35.763306 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:38:35.763401 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:38:35.763407 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:38:35.763431 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:38:35.763483 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:38:35.763490 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:38:35.763514 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:38:35.763557 1685746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:38:35.889485 1685746 provision.go:177] copyRemoteCerts
	I1222 01:38:35.889557 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:35.889605 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.914143 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.016150 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:38:36.035930 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:36.054716 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:36.072586 1685746 provision.go:87] duration metric: took 329.680992ms to configureAuth
	I1222 01:38:36.072618 1685746 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:36.072830 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:36.072842 1685746 machine.go:97] duration metric: took 3.806623107s to provisionDockerMachine
	I1222 01:38:36.072850 1685746 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:38:36.072866 1685746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:36.072926 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:36.072980 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.090324 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.187013 1685746 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:36.191029 1685746 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:36.191111 1685746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:36.191134 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:38:36.191215 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:38:36.191355 1685746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:38:36.191477 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:36.200008 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:36.219292 1685746 start.go:296] duration metric: took 146.420744ms for postStartSetup
	I1222 01:38:36.219381 1685746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:36.219430 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.237412 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.336664 1685746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:36.342619 1685746 fix.go:56] duration metric: took 4.429400761s for fixHost
	I1222 01:38:36.342646 1685746 start.go:83] releasing machines lock for "newest-cni-869293", held for 4.429452897s
	I1222 01:38:36.342750 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:36.362211 1685746 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:36.362264 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.362344 1685746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:36.362407 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.385216 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.393122 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.571819 1685746 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:36.578591 1685746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:36.583121 1685746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:36.583193 1685746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:36.591539 1685746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:38:36.591564 1685746 start.go:496] detecting cgroup driver to use...
	I1222 01:38:36.591620 1685746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:36.591689 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:38:36.609980 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:38:36.623763 1685746 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:36.623883 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:36.639236 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:36.652937 1685746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:36.763224 1685746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:36.883204 1685746 docker.go:234] disabling docker service ...
	I1222 01:38:36.883275 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:36.898372 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:36.911453 1685746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:37.034252 1685746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:37.157335 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:37.170564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:37.185195 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:38:37.194710 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:38:37.204647 1685746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:38:37.204731 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:38:37.214808 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.223830 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:38:37.232600 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.242680 1685746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:37.254369 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:38:37.265094 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:38:37.278711 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:38:37.288297 1685746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:37.299386 1685746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:37.306803 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.412668 1685746 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:38:37.531042 1685746 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:38:37.531187 1685746 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:38:37.535291 1685746 start.go:564] Will wait 60s for crictl version
	I1222 01:38:37.535398 1685746 ssh_runner.go:195] Run: which crictl
	I1222 01:38:37.539239 1685746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:37.568186 1685746 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:38:37.568329 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.589324 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.614497 1685746 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:38:37.617592 1685746 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:37.633737 1685746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:37.637631 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.650774 1685746 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1222 01:38:34.249047 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:36.748953 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:37.653725 1685746 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:37.653882 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:37.653965 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.679481 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.679507 1685746 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:38:37.679567 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.707944 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.707969 1685746 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:37.707979 1685746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:38:37.708083 1685746 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:37.708165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:38:37.740577 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:37.740600 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:37.740621 1685746 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:37.740645 1685746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:37.740759 1685746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:38:37.740831 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:37.749395 1685746 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:37.749470 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:37.757587 1685746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:38:37.770794 1685746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:37.784049 1685746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:38:37.797792 1685746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:37.801552 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.811598 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.940636 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:37.962625 1685746 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:38:37.962649 1685746 certs.go:195] generating shared ca certs ...
	I1222 01:38:37.962682 1685746 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:37.962837 1685746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:38:37.962900 1685746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:38:37.962912 1685746 certs.go:257] generating profile certs ...
	I1222 01:38:37.963014 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:38:37.963084 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:38:37.963128 1685746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:38:37.963238 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:38:37.963276 1685746 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:37.963287 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:38:37.963316 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:37.963343 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:37.963379 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:37.963434 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:37.964596 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:37.999913 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:38.025465 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:38.053443 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:38.087732 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:38.107200 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:38:38.125482 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:38.143284 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:38.161557 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:38:38.180124 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:38.198446 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:38:38.215766 1685746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:38.228774 1685746 ssh_runner.go:195] Run: openssl version
	I1222 01:38:38.235631 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.244039 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:38:38.252123 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256169 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256240 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.297738 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:38.305673 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.313250 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:38.321143 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325161 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325259 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.366760 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:38.375589 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.383142 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:38:38.391262 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395405 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395474 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.436708 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:38.444445 1685746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:38.448390 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:38:38.489618 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:38:38.530725 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:38:38.571636 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:38:38.612592 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:38:38.653872 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:38:38.695135 1685746 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:38.695236 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:38.695304 1685746 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:38.730406 1685746 cri.go:96] found id: ""
	I1222 01:38:38.730480 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:38.742929 1685746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:38:38.742952 1685746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:38:38.743012 1685746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:38:38.765617 1685746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:38:38.766245 1685746 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.766510 1685746 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-869293" cluster setting kubeconfig missing "newest-cni-869293" context setting]
	I1222 01:38:38.766957 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.768687 1685746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:38:38.776658 1685746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:38:38.776695 1685746 kubeadm.go:602] duration metric: took 33.737033ms to restartPrimaryControlPlane
	I1222 01:38:38.776705 1685746 kubeadm.go:403] duration metric: took 81.581475ms to StartCluster
	I1222 01:38:38.776720 1685746 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.776793 1685746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.777670 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.777888 1685746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:38:38.778285 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:38.778259 1685746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:38:38.778393 1685746 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-869293"
	I1222 01:38:38.778408 1685746 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-869293"
	I1222 01:38:38.778433 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.778917 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.779098 1685746 addons.go:70] Setting dashboard=true in profile "newest-cni-869293"
	I1222 01:38:38.779126 1685746 addons.go:239] Setting addon dashboard=true in "newest-cni-869293"
	W1222 01:38:38.779211 1685746 addons.go:248] addon dashboard should already be in state true
	I1222 01:38:38.779264 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.779355 1685746 addons.go:70] Setting default-storageclass=true in profile "newest-cni-869293"
	I1222 01:38:38.779382 1685746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-869293"
	I1222 01:38:38.779657 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.780717 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.783183 1685746 out.go:179] * Verifying Kubernetes components...
	I1222 01:38:38.795835 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:38.839727 1685746 addons.go:239] Setting addon default-storageclass=true in "newest-cni-869293"
	I1222 01:38:38.839773 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.844706 1685746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:38:38.844788 1685746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:38:38.845056 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.847706 1685746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:38.847732 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:38:38.847798 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.850623 1685746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:38:38.856243 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:38:38.856273 1685746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:38:38.856351 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.873943 1685746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:38.873976 1685746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:38:38.874046 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.897069 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.917887 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.925239 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:39.040289 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:39.062591 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:39.071403 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:38:39.071429 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:38:39.085714 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:38:39.085742 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:38:39.113564 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:39.117642 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:38:39.117668 1685746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:38:39.160317 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:38:39.160342 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:38:39.179666 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:38:39.179693 1685746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:38:39.195940 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:38:39.195967 1685746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:38:39.211128 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:38:39.211152 1685746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:38:39.229341 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:38:39.229367 1685746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:38:39.242863 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.242891 1685746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:38:39.257396 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.740898 1685746 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:38:39.740996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:39.741091 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741148 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.741150 1685746 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741362 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.924082 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.987453 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.012530 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.076254 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.106299 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.156991 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.241110 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:40.291973 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.350617 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:40.361182 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.389531 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.437774 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.465333 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.692837 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.741460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:40.766384 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.961925 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.997418 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:41.047996 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:41.103696 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:41.241962 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:41.674831 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.248045 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:41.248244 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:41.741299 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:41.744404 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.118142 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:42.189177 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.241414 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:42.263947 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:42.333305 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.741698 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.241589 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.265699 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:43.338843 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.509282 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:43.559893 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:43.581660 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.623026 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.741112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.241130 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.741229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.931703 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:45.008485 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.244431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.741178 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.765524 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:45.843868 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.977122 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:46.040374 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:46.241453 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:46.486248 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:46.559168 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.248311 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:45.249134 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:47.748896 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:46.741869 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.241095 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.741431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.241112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.294921 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:48.361284 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:48.741773 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.852570 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:48.911873 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.241377 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:49.368148 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:49.429800 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.741220 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.241219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.741547 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:51.241159 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:49.748932 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:52.248838 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:51.741774 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.241901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.391494 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:52.452597 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.452636 1685746 retry.go:84] will retry after 11.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.508552 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:52.579056 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.741603 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.241037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.297681 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:53.358617 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:53.741128 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.241259 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.741444 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.241131 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.741185 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.241903 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:54.748014 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:56.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:56.742022 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.871217 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:56.931377 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:56.931421 1685746 retry.go:84] will retry after 12.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:57.241904 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:57.741132 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.241082 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.741129 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.241514 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.741571 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.241104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.342627 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:00.433191 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.433235 1685746 retry.go:84] will retry after 8.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.741833 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:01.241455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:59.248212 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:01.248492 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:01.741502 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.241599 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.741070 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.241152 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.041996 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:04.111760 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.111812 1685746 retry.go:84] will retry after 10s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.242089 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.741350 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.241736 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.741098 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:06.241279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:03.747982 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:05.748583 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:07.748998 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:06.742311 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.241927 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.741133 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.241157 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.532510 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:08.603273 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.603314 1685746 retry.go:84] will retry after 7.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.741625 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.241616 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.741180 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.845450 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:09.907468 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:10.242040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:10.742004 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:11.242043 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:10.248934 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:12.748076 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:11.741028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.241114 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.741779 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.241398 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.741757 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.084932 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:14.149870 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.149915 1685746 retry.go:84] will retry after 13.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.241288 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.742009 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.241500 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.241659 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.395227 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:16.456949 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:14.748959 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:17.248674 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:16.741507 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.241459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.741042 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.241111 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.741162 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.241875 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.741715 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.241732 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:21.241347 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:19.748622 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:21.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:21.741639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.241911 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.742051 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.241970 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.741127 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.241560 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.741692 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.241106 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.741122 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:26.241137 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:24.248544 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:26.747990 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:26.741585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.241155 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.301256 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:27.375517 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.375598 1685746 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.241034 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.741642 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:29.226555 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:39:29.242011 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:29.291422 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:29.741622 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.245888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:31.241550 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:28.748186 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:30.748280 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:32.748600 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:31.741066 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.241183 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.741695 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.241134 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.741807 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.241685 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.741125 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.241915 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.741241 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:36.241639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:35.249008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:37.748582 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:36.741652 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.241141 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.741891 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.054310 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:38.118505 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.118547 1685746 retry.go:84] will retry after 47.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.241764 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:39.241609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:39.241696 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:39.269891 1685746 cri.go:96] found id: ""
	I1222 01:39:39.269914 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.269923 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:39.269930 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:39.269991 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:39.300389 1685746 cri.go:96] found id: ""
	I1222 01:39:39.300414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.300423 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:39.300430 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:39.300501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:39.326557 1685746 cri.go:96] found id: ""
	I1222 01:39:39.326582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.326592 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:39.326598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:39.326697 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:39.354049 1685746 cri.go:96] found id: ""
	I1222 01:39:39.354115 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.354125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:39.354132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:39.354202 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:39.380457 1685746 cri.go:96] found id: ""
	I1222 01:39:39.380490 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.380500 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:39.380507 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:39.380577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:39.407039 1685746 cri.go:96] found id: ""
	I1222 01:39:39.407062 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.407070 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:39.407076 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:39.407139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:39.431541 1685746 cri.go:96] found id: ""
	I1222 01:39:39.431568 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.431577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:39.431584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:39.431676 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:39.457555 1685746 cri.go:96] found id: ""
	I1222 01:39:39.457588 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.457607 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:39.457616 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:39.457629 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:39.517907 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:39.517997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:39.534348 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:39.534373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:39.607407 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:39.607438 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:39.607463 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:39.634050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:39.634094 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:40.248054 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:42.748083 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:42.163786 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:42.176868 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:42.176959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:42.208642 1685746 cri.go:96] found id: ""
	I1222 01:39:42.208672 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.208682 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:42.208688 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:42.208757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:42.249523 1685746 cri.go:96] found id: ""
	I1222 01:39:42.249552 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.249562 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:42.249569 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:42.249641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:42.283515 1685746 cri.go:96] found id: ""
	I1222 01:39:42.283542 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.283550 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:42.283557 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:42.283659 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:42.312237 1685746 cri.go:96] found id: ""
	I1222 01:39:42.312260 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.312269 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:42.312276 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:42.312335 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:42.341269 1685746 cri.go:96] found id: ""
	I1222 01:39:42.341297 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.341306 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:42.341312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:42.341374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:42.367696 1685746 cri.go:96] found id: ""
	I1222 01:39:42.367723 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.367732 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:42.367739 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:42.367804 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:42.396577 1685746 cri.go:96] found id: ""
	I1222 01:39:42.396602 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.396612 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:42.396618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:42.396689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:42.426348 1685746 cri.go:96] found id: ""
	I1222 01:39:42.426380 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.426392 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:42.426413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:42.426433 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:42.481969 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:42.482005 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:42.499357 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:42.499436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:42.576627 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:42.576649 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:42.576663 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:42.601751 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:42.601784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.131239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:45.157288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:45.157379 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:45.207917 1685746 cri.go:96] found id: ""
	I1222 01:39:45.207953 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.207963 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:45.207975 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:45.208042 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:45.255413 1685746 cri.go:96] found id: ""
	I1222 01:39:45.255448 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.255459 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:45.255467 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:45.255564 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:45.300163 1685746 cri.go:96] found id: ""
	I1222 01:39:45.300196 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.300206 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:45.300214 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:45.300285 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:45.348918 1685746 cri.go:96] found id: ""
	I1222 01:39:45.348943 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.348952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:45.348959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:45.349022 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:45.379477 1685746 cri.go:96] found id: ""
	I1222 01:39:45.379502 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.379512 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:45.379518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:45.379580 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:45.410514 1685746 cri.go:96] found id: ""
	I1222 01:39:45.410535 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.410543 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:45.410550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:45.410611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:45.436661 1685746 cri.go:96] found id: ""
	I1222 01:39:45.436686 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.436695 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:45.436702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:45.436769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:45.466972 1685746 cri.go:96] found id: ""
	I1222 01:39:45.467001 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.467010 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:45.467019 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:45.467032 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:45.567688 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:45.567712 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:45.567731 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:45.593712 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:45.593757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.626150 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:45.626179 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:45.681273 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:45.681310 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:39:44.748908 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:47.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:48.196684 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:48.207640 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:48.207718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:48.232650 1685746 cri.go:96] found id: ""
	I1222 01:39:48.232680 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.232688 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:48.232708 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:48.232772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:48.264801 1685746 cri.go:96] found id: ""
	I1222 01:39:48.264831 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.264841 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:48.264848 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:48.264915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:48.300270 1685746 cri.go:96] found id: ""
	I1222 01:39:48.300300 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.300310 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:48.300317 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:48.300388 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:48.334711 1685746 cri.go:96] found id: ""
	I1222 01:39:48.334782 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.334806 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:48.334821 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:48.334898 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:48.359955 1685746 cri.go:96] found id: ""
	I1222 01:39:48.360023 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.360038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:48.360052 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:48.360124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:48.386551 1685746 cri.go:96] found id: ""
	I1222 01:39:48.386574 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.386583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:48.386589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:48.386648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:48.412026 1685746 cri.go:96] found id: ""
	I1222 01:39:48.412052 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.412062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:48.412069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:48.412129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:48.440847 1685746 cri.go:96] found id: ""
	I1222 01:39:48.440870 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.440878 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:48.440887 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:48.440897 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:48.496591 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:48.496673 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:48.512755 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:48.512834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:48.596174 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:48.596249 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:48.596281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:48.621362 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:48.621397 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:51.155431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:51.169542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:51.169616 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:51.195476 1685746 cri.go:96] found id: ""
	I1222 01:39:51.195500 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.195509 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:51.195516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:51.195585 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:51.220215 1685746 cri.go:96] found id: ""
	I1222 01:39:51.220240 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.220249 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:51.220255 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:51.220324 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:51.248478 1685746 cri.go:96] found id: ""
	I1222 01:39:51.248508 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.248527 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:51.248534 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:51.248594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:51.282587 1685746 cri.go:96] found id: ""
	I1222 01:39:51.282615 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.282624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:51.282630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:51.282691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:51.310999 1685746 cri.go:96] found id: ""
	I1222 01:39:51.311029 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.311038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:51.311044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:51.311105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:51.338337 1685746 cri.go:96] found id: ""
	I1222 01:39:51.338414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.338431 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:51.338438 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:51.338517 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:51.365554 1685746 cri.go:96] found id: ""
	I1222 01:39:51.365582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.365591 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:51.365598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:51.365656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:51.389874 1685746 cri.go:96] found id: ""
	I1222 01:39:51.389903 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.389913 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:51.389922 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:51.389933 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:51.449732 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:51.449797 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:51.467573 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:51.467669 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:51.568437 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:51.568512 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:51.568561 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:51.595758 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:51.595841 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:49.249046 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:51.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:53.905270 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:53.968241 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:53.968406 1685746 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:39:54.129563 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:54.143910 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:54.144012 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:54.169973 1685746 cri.go:96] found id: ""
	I1222 01:39:54.170009 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.170018 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:54.170042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:54.170158 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:54.198811 1685746 cri.go:96] found id: ""
	I1222 01:39:54.198838 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.198847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:54.198854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:54.198917 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:54.224425 1685746 cri.go:96] found id: ""
	I1222 01:39:54.224452 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.224462 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:54.224468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:54.224549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:54.273957 1685746 cri.go:96] found id: ""
	I1222 01:39:54.273983 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.273992 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:54.273998 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:54.274059 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:54.306801 1685746 cri.go:96] found id: ""
	I1222 01:39:54.306826 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.306836 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:54.306842 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:54.306916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:54.339513 1685746 cri.go:96] found id: ""
	I1222 01:39:54.339539 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.339548 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:54.339555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:54.339617 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:54.365259 1685746 cri.go:96] found id: ""
	I1222 01:39:54.365285 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.365295 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:54.365301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:54.365363 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:54.390271 1685746 cri.go:96] found id: ""
	I1222 01:39:54.390294 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.390303 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:54.390312 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:54.390324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:54.445696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:54.445728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:54.460676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:54.460751 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:54.537038 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:54.537060 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:54.537075 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:54.566201 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:54.566234 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:53.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:56.248725 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:57.093953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:57.104681 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:57.104755 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:57.132428 1685746 cri.go:96] found id: ""
	I1222 01:39:57.132455 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.132465 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:57.132472 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:57.132532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:57.158487 1685746 cri.go:96] found id: ""
	I1222 01:39:57.158512 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.158521 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:57.158528 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:57.158589 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:57.184175 1685746 cri.go:96] found id: ""
	I1222 01:39:57.184203 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.184213 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:57.184219 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:57.184279 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:57.215724 1685746 cri.go:96] found id: ""
	I1222 01:39:57.215752 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.215761 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:57.215768 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:57.215830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:57.252375 1685746 cri.go:96] found id: ""
	I1222 01:39:57.252408 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.252420 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:57.252427 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:57.252499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:57.291286 1685746 cri.go:96] found id: ""
	I1222 01:39:57.291323 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.291333 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:57.291344 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:57.291408 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:57.322496 1685746 cri.go:96] found id: ""
	I1222 01:39:57.322577 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.322594 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:57.322602 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:57.322678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:57.352695 1685746 cri.go:96] found id: ""
	I1222 01:39:57.352722 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.352731 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:57.352741 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:57.352754 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:57.410232 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:57.410271 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:57.425451 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:57.425481 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:57.498123 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:39:57.498197 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:57.498226 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:57.530586 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:57.530677 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:00.062361 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:00.152699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:00.152784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:00.243584 1685746 cri.go:96] found id: ""
	I1222 01:40:00.243618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.243635 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:00.243645 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:00.243728 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:00.323644 1685746 cri.go:96] found id: ""
	I1222 01:40:00.323704 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.323720 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:00.323730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:00.323805 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:00.411473 1685746 cri.go:96] found id: ""
	I1222 01:40:00.411502 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.411521 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:00.411532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:00.411621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:00.511894 1685746 cri.go:96] found id: ""
	I1222 01:40:00.511922 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.511933 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:00.511941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:00.512015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:00.575706 1685746 cri.go:96] found id: ""
	I1222 01:40:00.575736 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.575746 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:00.575753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:00.575828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:00.666886 1685746 cri.go:96] found id: ""
	I1222 01:40:00.666913 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.666922 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:00.666929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:00.667011 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:00.704456 1685746 cri.go:96] found id: ""
	I1222 01:40:00.704490 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.704499 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:00.704513 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:00.704583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:00.763369 1685746 cri.go:96] found id: ""
	I1222 01:40:00.763404 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.763415 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:00.763425 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:00.763439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:00.822507 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:00.822546 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:00.839492 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:00.839529 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:00.911350 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:40:00.911374 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:00.911389 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:00.937901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:00.937953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:01.674108 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:58.748290 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:00.756406 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:01.748211 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:01.748257 1685746 retry.go:84] will retry after 28.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:03.469297 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:03.480071 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:03.480145 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:03.519512 1685746 cri.go:96] found id: ""
	I1222 01:40:03.519627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.519661 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:03.519709 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:03.520078 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:03.555737 1685746 cri.go:96] found id: ""
	I1222 01:40:03.555763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.555806 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:03.555819 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:03.555909 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:03.580955 1685746 cri.go:96] found id: ""
	I1222 01:40:03.580986 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.580995 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:03.581004 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:03.581068 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:03.610855 1685746 cri.go:96] found id: ""
	I1222 01:40:03.610935 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.610952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:03.610961 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:03.611037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:03.635994 1685746 cri.go:96] found id: ""
	I1222 01:40:03.636019 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.636027 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:03.636033 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:03.636103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:03.661008 1685746 cri.go:96] found id: ""
	I1222 01:40:03.661086 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.661109 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:03.661132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:03.661249 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:03.685551 1685746 cri.go:96] found id: ""
	I1222 01:40:03.685577 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.685586 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:03.685594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:03.685653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:03.710025 1685746 cri.go:96] found id: ""
	I1222 01:40:03.710054 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.710063 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:03.710073 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:03.710109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:03.748992 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:03.749066 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:03.812952 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:03.812990 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:03.828176 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:03.828207 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:03.895557 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:40:03.895583 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:03.895596 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:06.421124 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:06.432321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:06.432435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:06.458845 1685746 cri.go:96] found id: ""
	I1222 01:40:06.458926 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.458944 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:06.458951 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:06.459024 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:06.483853 1685746 cri.go:96] found id: ""
	I1222 01:40:06.483881 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.483890 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:06.483897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:06.483956 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:06.518710 1685746 cri.go:96] found id: ""
	I1222 01:40:06.518741 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.518750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:06.518757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:06.518821 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:06.549152 1685746 cri.go:96] found id: ""
	I1222 01:40:06.549183 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.549191 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:06.549198 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:06.549256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:06.579003 1685746 cri.go:96] found id: ""
	I1222 01:40:06.579032 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.579041 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:06.579048 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:06.579110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:06.614999 1685746 cri.go:96] found id: ""
	I1222 01:40:06.615029 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.615038 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:06.615045 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:06.615109 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:06.644049 1685746 cri.go:96] found id: ""
	I1222 01:40:06.644073 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.644082 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:06.644088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:06.644150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:06.670551 1685746 cri.go:96] found id: ""
	I1222 01:40:06.670580 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.670590 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:06.670599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:06.670630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:03.248649 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:05.249130 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:07.749016 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:06.696127 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:06.696164 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:06.728583 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:06.728612 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:06.788068 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:06.788103 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:06.805676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:06.805708 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:06.875097 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:40:09.375863 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:09.386805 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:09.386883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:09.413272 1685746 cri.go:96] found id: ""
	I1222 01:40:09.413299 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.413307 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:09.413313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:09.413374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:09.438591 1685746 cri.go:96] found id: ""
	I1222 01:40:09.438615 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.438623 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:09.438630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:09.438692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:09.463919 1685746 cri.go:96] found id: ""
	I1222 01:40:09.463943 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.463952 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:09.463959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:09.464026 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:09.493604 1685746 cri.go:96] found id: ""
	I1222 01:40:09.493627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.493641 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:09.493648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:09.493707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:09.529370 1685746 cri.go:96] found id: ""
	I1222 01:40:09.529394 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.529404 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:09.529411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:09.529477 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:09.562121 1685746 cri.go:96] found id: ""
	I1222 01:40:09.562150 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.562160 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:09.562167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:09.562233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:09.587896 1685746 cri.go:96] found id: ""
	I1222 01:40:09.587924 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.587935 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:09.587942 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:09.588010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:09.613576 1685746 cri.go:96] found id: ""
	I1222 01:40:09.613600 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.613609 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:09.613619 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:09.613630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:09.671590 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:09.671627 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:09.688438 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:09.688468 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:09.770484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.770797 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:09.770834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:09.803134 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:09.803237 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:10.247989 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:12.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:12.334803 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:12.345660 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:12.345780 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:12.375026 1685746 cri.go:96] found id: ""
	I1222 01:40:12.375056 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.375067 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:12.375075 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:12.375154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:12.400255 1685746 cri.go:96] found id: ""
	I1222 01:40:12.400282 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.400291 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:12.400299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:12.400402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:12.425430 1685746 cri.go:96] found id: ""
	I1222 01:40:12.425458 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.425467 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:12.425474 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:12.425535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:12.450734 1685746 cri.go:96] found id: ""
	I1222 01:40:12.450816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.450832 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:12.450841 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:12.450918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:12.477690 1685746 cri.go:96] found id: ""
	I1222 01:40:12.477719 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.477735 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:12.477742 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:12.477803 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:12.517751 1685746 cri.go:96] found id: ""
	I1222 01:40:12.517779 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.517787 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:12.517794 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:12.517858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:12.544749 1685746 cri.go:96] found id: ""
	I1222 01:40:12.544777 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.544786 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:12.544793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:12.544858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:12.576758 1685746 cri.go:96] found id: ""
	I1222 01:40:12.576786 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.576795 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:12.576805 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:12.576816 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:12.592450 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:12.592478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:12.658073 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:12.658125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:12.658138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:12.683599 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:12.683637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:12.715675 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:12.715707 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:15.275108 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:15.285651 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:15.285724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:15.311249 1685746 cri.go:96] found id: ""
	I1222 01:40:15.311277 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.311287 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:15.311293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:15.311353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:15.336192 1685746 cri.go:96] found id: ""
	I1222 01:40:15.336218 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.336226 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:15.336234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:15.336297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:15.362231 1685746 cri.go:96] found id: ""
	I1222 01:40:15.362254 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.362263 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:15.362269 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:15.362331 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:15.390149 1685746 cri.go:96] found id: ""
	I1222 01:40:15.390176 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.390185 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:15.390192 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:15.390259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:15.417421 1685746 cri.go:96] found id: ""
	I1222 01:40:15.417446 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.417456 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:15.417464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:15.417530 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:15.444318 1685746 cri.go:96] found id: ""
	I1222 01:40:15.444346 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.444356 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:15.444368 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:15.444428 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:15.469475 1685746 cri.go:96] found id: ""
	I1222 01:40:15.469503 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.469512 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:15.469520 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:15.469581 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:15.501561 1685746 cri.go:96] found id: ""
	I1222 01:40:15.501588 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.501597 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:15.501606 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:15.501637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:15.518032 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:15.518062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:15.588024 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:15.588049 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:15.588062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:15.613914 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:15.613953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:15.645712 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:15.645739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:40:14.747949 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:16.749012 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:18.200926 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:18.211578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:18.211651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:18.237396 1685746 cri.go:96] found id: ""
	I1222 01:40:18.237421 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.237429 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:18.237436 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:18.237503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:18.264313 1685746 cri.go:96] found id: ""
	I1222 01:40:18.264345 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.264356 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:18.264369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:18.264451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:18.290240 1685746 cri.go:96] found id: ""
	I1222 01:40:18.290265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.290274 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:18.290281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:18.290340 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:18.315874 1685746 cri.go:96] found id: ""
	I1222 01:40:18.315898 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.315907 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:18.315914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:18.315975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:18.340813 1685746 cri.go:96] found id: ""
	I1222 01:40:18.340836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.340844 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:18.340852 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:18.340912 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:18.368094 1685746 cri.go:96] found id: ""
	I1222 01:40:18.368119 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.368128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:18.368135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:18.368251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:18.393525 1685746 cri.go:96] found id: ""
	I1222 01:40:18.393551 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.393559 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:18.393566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:18.393629 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:18.419984 1685746 cri.go:96] found id: ""
	I1222 01:40:18.420011 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.420020 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:18.420031 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:18.420043 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:18.435061 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:18.435090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:18.511216 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:18.511242 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:18.511258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:18.539215 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:18.539253 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:18.571721 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:18.571752 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.133335 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:21.144470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:21.144552 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:21.170402 1685746 cri.go:96] found id: ""
	I1222 01:40:21.170435 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.170444 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:21.170451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:21.170514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:21.197647 1685746 cri.go:96] found id: ""
	I1222 01:40:21.197674 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.197683 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:21.197690 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:21.197754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:21.231085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.231120 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.231130 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:21.231137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:21.231243 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:21.268085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.268112 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.268121 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:21.268129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:21.268195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:21.293752 1685746 cri.go:96] found id: ""
	I1222 01:40:21.293781 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.293791 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:21.293797 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:21.293864 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:21.320171 1685746 cri.go:96] found id: ""
	I1222 01:40:21.320195 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.320203 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:21.320210 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:21.320273 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:21.346069 1685746 cri.go:96] found id: ""
	I1222 01:40:21.346162 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.346177 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:21.346185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:21.346246 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:21.371416 1685746 cri.go:96] found id: ""
	I1222 01:40:21.371443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.371452 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:21.371462 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:21.371475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:21.404674 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:21.404703 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.460348 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:21.460388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:21.475958 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:21.475994 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:21.561495 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:21.561520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:21.561533 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:19.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:21.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:24.089244 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:24.100814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:24.100889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:24.126847 1685746 cri.go:96] found id: ""
	I1222 01:40:24.126878 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.126888 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:24.126895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:24.126959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:24.152740 1685746 cri.go:96] found id: ""
	I1222 01:40:24.152768 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.152778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:24.152784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:24.152845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:24.178506 1685746 cri.go:96] found id: ""
	I1222 01:40:24.178532 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.178540 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:24.178547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:24.178628 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:24.210111 1685746 cri.go:96] found id: ""
	I1222 01:40:24.210138 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.210147 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:24.210156 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:24.210219 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:24.234336 1685746 cri.go:96] found id: ""
	I1222 01:40:24.234358 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.234372 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:24.234379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:24.234440 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:24.259792 1685746 cri.go:96] found id: ""
	I1222 01:40:24.259861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.259884 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:24.259898 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:24.259973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:24.285594 1685746 cri.go:96] found id: ""
	I1222 01:40:24.285623 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.285632 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:24.285639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:24.285722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:24.312027 1685746 cri.go:96] found id: ""
	I1222 01:40:24.312055 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.312064 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:24.312074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:24.312088 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:24.345845 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:24.345873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:24.404101 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:24.404140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:24.419436 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:24.419465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:24.485147 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:24.485182 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:24.485195 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:25.275578 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:40:25.338578 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:25.338685 1685746 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:40:23.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:25.748112 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:27.748979 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:27.016338 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:27.030615 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:27.030685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:27.060751 1685746 cri.go:96] found id: ""
	I1222 01:40:27.060775 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.060784 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:27.060791 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:27.060850 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:27.088784 1685746 cri.go:96] found id: ""
	I1222 01:40:27.088807 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.088816 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:27.088822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:27.088889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:27.115559 1685746 cri.go:96] found id: ""
	I1222 01:40:27.115581 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.115590 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:27.115596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:27.115658 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:27.141509 1685746 cri.go:96] found id: ""
	I1222 01:40:27.141579 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.141602 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:27.141624 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:27.141712 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:27.168944 1685746 cri.go:96] found id: ""
	I1222 01:40:27.168984 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.168993 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:27.169006 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:27.169076 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:27.194554 1685746 cri.go:96] found id: ""
	I1222 01:40:27.194584 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.194593 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:27.194599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:27.194662 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:27.219603 1685746 cri.go:96] found id: ""
	I1222 01:40:27.219684 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.219707 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:27.219721 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:27.219801 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:27.246999 1685746 cri.go:96] found id: ""
	I1222 01:40:27.247033 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.247042 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:27.247067 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:27.247087 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:27.302977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:27.303012 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:27.318364 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:27.318398 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:27.385339 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:27.385413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:27.385442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:27.411346 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:27.411384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:29.941731 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:29.955808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:29.955883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:29.982684 1685746 cri.go:96] found id: ""
	I1222 01:40:29.982709 1685746 logs.go:282] 0 containers: []
	W1222 01:40:29.982718 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:29.982725 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:29.982796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:30.036793 1685746 cri.go:96] found id: ""
	I1222 01:40:30.036836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.036847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:30.036858 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:30.036986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:30.127706 1685746 cri.go:96] found id: ""
	I1222 01:40:30.127740 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.127750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:30.127757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:30.127828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:30.158476 1685746 cri.go:96] found id: ""
	I1222 01:40:30.158509 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.158521 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:30.158529 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:30.158598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:30.187425 1685746 cri.go:96] found id: ""
	I1222 01:40:30.187453 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.187463 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:30.187470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:30.187539 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:30.216013 1685746 cri.go:96] found id: ""
	I1222 01:40:30.216043 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.216052 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:30.216060 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:30.216125 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:30.241947 1685746 cri.go:96] found id: ""
	I1222 01:40:30.241975 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.241985 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:30.241991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:30.242074 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:30.271569 1685746 cri.go:96] found id: ""
	I1222 01:40:30.271595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.271603 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:30.271613 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:30.271625 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:30.327858 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:30.327896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:30.343479 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:30.343505 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:30.411657 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:30.411678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:30.411692 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:30.436851 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:30.436886 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:30.511390 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:40:30.582457 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:30.582560 1685746 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:40:30.587532 1685746 out.go:179] * Enabled addons: 
	I1222 01:40:30.590426 1685746 addons.go:530] duration metric: took 1m51.812167431s for enable addons: enabled=[]
	W1222 01:40:30.247997 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:32.248097 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:32.969406 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:32.980360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:32.980444 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:33.016753 1685746 cri.go:96] found id: ""
	I1222 01:40:33.016778 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.016787 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:33.016795 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:33.016881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:33.053288 1685746 cri.go:96] found id: ""
	I1222 01:40:33.053315 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.053334 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:33.053358 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:33.053457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:33.087392 1685746 cri.go:96] found id: ""
	I1222 01:40:33.087417 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.087426 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:33.087432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:33.087492 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:33.113564 1685746 cri.go:96] found id: ""
	I1222 01:40:33.113595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.113604 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:33.113611 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:33.113698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:33.143733 1685746 cri.go:96] found id: ""
	I1222 01:40:33.143757 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.143766 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:33.143772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:33.143835 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:33.169776 1685746 cri.go:96] found id: ""
	I1222 01:40:33.169808 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.169816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:33.169824 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:33.169887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:33.198413 1685746 cri.go:96] found id: ""
	I1222 01:40:33.198438 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.198446 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:33.198453 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:33.198514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:33.223746 1685746 cri.go:96] found id: ""
	I1222 01:40:33.223816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.223838 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:33.223855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:33.223866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:33.249217 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:33.249247 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:33.282243 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:33.282269 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:33.340677 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:33.340714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:33.355635 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:33.355667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:33.438690 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:35.940454 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:35.954241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:35.954312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:35.979549 1685746 cri.go:96] found id: ""
	I1222 01:40:35.979576 1685746 logs.go:282] 0 containers: []
	W1222 01:40:35.979585 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:35.979592 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:35.979654 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:36.010177 1685746 cri.go:96] found id: ""
	I1222 01:40:36.010207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.010217 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:36.010224 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:36.010295 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:36.045048 1685746 cri.go:96] found id: ""
	I1222 01:40:36.045078 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.045088 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:36.045095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:36.045157 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:36.074866 1685746 cri.go:96] found id: ""
	I1222 01:40:36.074889 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.074897 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:36.074903 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:36.074965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:36.101425 1685746 cri.go:96] found id: ""
	I1222 01:40:36.101499 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.101511 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:36.101518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:36.106750 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:36.134167 1685746 cri.go:96] found id: ""
	I1222 01:40:36.134205 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.134215 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:36.134223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:36.134288 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:36.159767 1685746 cri.go:96] found id: ""
	I1222 01:40:36.159792 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.159802 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:36.159809 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:36.159873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:36.188878 1685746 cri.go:96] found id: ""
	I1222 01:40:36.188907 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.188917 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:36.188928 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:36.188941 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:36.253797 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:36.253877 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:36.253906 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:36.279371 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:36.279408 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:36.308866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:36.308901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:36.365568 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:36.365603 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:34.248867 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:36.748755 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:38.881766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:38.892862 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:38.892944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:38.919366 1685746 cri.go:96] found id: ""
	I1222 01:40:38.919399 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.919409 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:38.919421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:38.919495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:38.953015 1685746 cri.go:96] found id: ""
	I1222 01:40:38.953042 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.953051 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:38.953058 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:38.953121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:38.979133 1685746 cri.go:96] found id: ""
	I1222 01:40:38.979158 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.979167 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:38.979173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:38.979236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:39.017688 1685746 cri.go:96] found id: ""
	I1222 01:40:39.017714 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.017724 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:39.017735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:39.017797 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:39.056591 1685746 cri.go:96] found id: ""
	I1222 01:40:39.056614 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.056622 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:39.056629 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:39.056686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:39.085085 1685746 cri.go:96] found id: ""
	I1222 01:40:39.085155 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.085177 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:39.085199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:39.085296 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:39.114614 1685746 cri.go:96] found id: ""
	I1222 01:40:39.114640 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.114649 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:39.114656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:39.114738 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:39.140466 1685746 cri.go:96] found id: ""
	I1222 01:40:39.140511 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.140520 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:39.140545 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:39.140564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:39.208956 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:39.208979 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:39.208992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:39.234396 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:39.234430 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:39.264983 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:39.265011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:39.320138 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:39.320173 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:38.748943 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:41.248791 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:41.835978 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:41.846958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:41.847061 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:41.872281 1685746 cri.go:96] found id: ""
	I1222 01:40:41.872307 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.872318 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:41.872324 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:41.872429 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:41.902068 1685746 cri.go:96] found id: ""
	I1222 01:40:41.902127 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.902137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:41.902163 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:41.902275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:41.936505 1685746 cri.go:96] found id: ""
	I1222 01:40:41.936535 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.936544 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:41.936550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:41.936615 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:41.961446 1685746 cri.go:96] found id: ""
	I1222 01:40:41.961480 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.961489 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:41.961496 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:41.961569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:41.989500 1685746 cri.go:96] found id: ""
	I1222 01:40:41.989582 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.989606 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:41.989631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:41.989730 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:42.028918 1685746 cri.go:96] found id: ""
	I1222 01:40:42.028947 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.028956 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:42.028963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:42.029037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:42.065570 1685746 cri.go:96] found id: ""
	I1222 01:40:42.065618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.065633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:42.065641 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:42.065724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:42.095634 1685746 cri.go:96] found id: ""
	I1222 01:40:42.095661 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.095671 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:42.095681 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:42.095702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:42.158126 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:42.158170 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:42.175600 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:42.175640 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:42.256856 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:42.256882 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:42.256896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:42.283618 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:42.283665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:44.813189 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:44.824766 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:44.824836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:44.853167 1685746 cri.go:96] found id: ""
	I1222 01:40:44.853192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.853201 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:44.853208 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:44.853269 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:44.878679 1685746 cri.go:96] found id: ""
	I1222 01:40:44.878711 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.878721 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:44.878728 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:44.878792 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:44.905070 1685746 cri.go:96] found id: ""
	I1222 01:40:44.905097 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.905106 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:44.905113 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:44.905177 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:44.930494 1685746 cri.go:96] found id: ""
	I1222 01:40:44.930523 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.930533 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:44.930539 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:44.930599 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:44.960159 1685746 cri.go:96] found id: ""
	I1222 01:40:44.960187 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.960196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:44.960203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:44.960308 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:44.985038 1685746 cri.go:96] found id: ""
	I1222 01:40:44.985066 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.985076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:44.985083 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:44.985147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:45.046474 1685746 cri.go:96] found id: ""
	I1222 01:40:45.046501 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.046511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:45.046518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:45.046590 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:45.111231 1685746 cri.go:96] found id: ""
	I1222 01:40:45.111266 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.111275 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:45.111286 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:45.111299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:45.180293 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:45.180418 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:45.231743 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:45.231786 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:45.318004 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:45.318031 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:45.318045 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:45.351434 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:45.351474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:43.748820 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:45.748974 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:47.885492 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:47.896303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:47.896380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:47.927221 1685746 cri.go:96] found id: ""
	I1222 01:40:47.927247 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.927257 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:47.927264 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:47.927326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:47.955055 1685746 cri.go:96] found id: ""
	I1222 01:40:47.955082 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.955091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:47.955098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:47.955167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:47.982730 1685746 cri.go:96] found id: ""
	I1222 01:40:47.982760 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.982770 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:47.982777 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:47.982841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:48.013060 1685746 cri.go:96] found id: ""
	I1222 01:40:48.013093 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.013104 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:48.013111 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:48.013184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:48.044824 1685746 cri.go:96] found id: ""
	I1222 01:40:48.044902 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.044918 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:48.044926 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:48.044994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:48.077777 1685746 cri.go:96] found id: ""
	I1222 01:40:48.077806 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.077816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:48.077822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:48.077887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:48.108631 1685746 cri.go:96] found id: ""
	I1222 01:40:48.108659 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.108669 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:48.108676 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:48.108767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:48.135002 1685746 cri.go:96] found id: ""
	I1222 01:40:48.135035 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.135045 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:48.135056 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:48.135092 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:48.192262 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:48.192299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:48.207972 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:48.208074 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:48.295537 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:48.295563 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:48.295583 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:48.322629 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:48.322665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:50.857236 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:50.868315 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:50.868396 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:50.894289 1685746 cri.go:96] found id: ""
	I1222 01:40:50.894337 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.894346 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:50.894353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:50.894414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:50.920265 1685746 cri.go:96] found id: ""
	I1222 01:40:50.920288 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.920297 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:50.920303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:50.920362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:50.946413 1685746 cri.go:96] found id: ""
	I1222 01:40:50.946437 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.946445 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:50.946452 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:50.946511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:50.973167 1685746 cri.go:96] found id: ""
	I1222 01:40:50.973192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.973202 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:50.973209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:50.973278 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:50.998695 1685746 cri.go:96] found id: ""
	I1222 01:40:50.998730 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.998739 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:50.998746 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:50.998812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:51.027679 1685746 cri.go:96] found id: ""
	I1222 01:40:51.027748 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.027770 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:51.027792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:51.027882 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:51.057709 1685746 cri.go:96] found id: ""
	I1222 01:40:51.057791 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.057816 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:51.057839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:51.057933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:51.085239 1685746 cri.go:96] found id: ""
	I1222 01:40:51.085311 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.085335 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:51.085361 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:51.085402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:51.143088 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:51.143131 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:51.159838 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:51.159866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:51.229894 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:51.229917 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:51.229932 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:51.258211 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:51.258321 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:48.248802 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:50.748310 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:53.799763 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:53.811321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:53.811400 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:53.838808 1685746 cri.go:96] found id: ""
	I1222 01:40:53.838834 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.838844 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:53.838851 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:53.838918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:53.865906 1685746 cri.go:96] found id: ""
	I1222 01:40:53.865930 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.865938 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:53.865945 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:53.866008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:53.891986 1685746 cri.go:96] found id: ""
	I1222 01:40:53.892030 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.892040 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:53.892047 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:53.892120 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:53.918633 1685746 cri.go:96] found id: ""
	I1222 01:40:53.918660 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.918670 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:53.918677 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:53.918748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:53.945224 1685746 cri.go:96] found id: ""
	I1222 01:40:53.945259 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.945268 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:53.945274 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:53.945345 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:53.976181 1685746 cri.go:96] found id: ""
	I1222 01:40:53.976207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.976216 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:53.976223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:53.976286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:54.017529 1685746 cri.go:96] found id: ""
	I1222 01:40:54.017609 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.017633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:54.017657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:54.017766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:54.050157 1685746 cri.go:96] found id: ""
	I1222 01:40:54.050234 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.050257 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:54.050284 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:54.050322 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:54.107873 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:54.107911 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:54.123115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:54.123192 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:54.189938 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:54.189963 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:54.189976 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:54.216904 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:54.216959 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:53.248434 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:55.748007 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:57.748191 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:56.757953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:56.769647 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:56.769793 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:56.802913 1685746 cri.go:96] found id: ""
	I1222 01:40:56.802941 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.802951 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:56.802958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:56.803018 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:56.828625 1685746 cri.go:96] found id: ""
	I1222 01:40:56.828654 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.828664 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:56.828671 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:56.828734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:56.853350 1685746 cri.go:96] found id: ""
	I1222 01:40:56.853378 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.853388 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:56.853394 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:56.853456 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:56.883418 1685746 cri.go:96] found id: ""
	I1222 01:40:56.883443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.883458 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:56.883466 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:56.883532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:56.912769 1685746 cri.go:96] found id: ""
	I1222 01:40:56.912799 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.912809 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:56.912817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:56.912880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:56.938494 1685746 cri.go:96] found id: ""
	I1222 01:40:56.938519 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.938529 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:56.938536 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:56.938602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:56.968944 1685746 cri.go:96] found id: ""
	I1222 01:40:56.968978 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.968987 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:56.968994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:56.969063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:56.995238 1685746 cri.go:96] found id: ""
	I1222 01:40:56.995265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.995274 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:56.995284 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:56.995295 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:57.022601 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:57.022641 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:57.055915 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:57.055993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:57.110958 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:57.110993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:57.126557 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:57.126587 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:57.199192 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:59.699460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:59.709928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:59.709999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:59.734831 1685746 cri.go:96] found id: ""
	I1222 01:40:59.734861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.734870 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:59.734876 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:59.734939 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:59.766737 1685746 cri.go:96] found id: ""
	I1222 01:40:59.766765 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.766773 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:59.766785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:59.766845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:59.800714 1685746 cri.go:96] found id: ""
	I1222 01:40:59.800742 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.800751 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:59.800757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:59.800817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:59.828842 1685746 cri.go:96] found id: ""
	I1222 01:40:59.828871 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.828880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:59.828888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:59.828951 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:59.854824 1685746 cri.go:96] found id: ""
	I1222 01:40:59.854848 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.854857 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:59.854864 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:59.854928 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:59.879691 1685746 cri.go:96] found id: ""
	I1222 01:40:59.879761 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.879784 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:59.879798 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:59.879874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:59.905099 1685746 cri.go:96] found id: ""
	I1222 01:40:59.905136 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.905146 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:59.905152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:59.905232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:59.929727 1685746 cri.go:96] found id: ""
	I1222 01:40:59.929763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.929775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:59.929784 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:59.929794 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:59.985430 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:59.985466 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:00.001212 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:00.001238 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:00.267041 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:00.267072 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:00.267085 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:00.299707 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:00.299756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:00.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:02.248653 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:02.866175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:02.877065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:02.877139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:02.902030 1685746 cri.go:96] found id: ""
	I1222 01:41:02.902137 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.902161 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:02.902183 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:02.902277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:02.928023 1685746 cri.go:96] found id: ""
	I1222 01:41:02.928048 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.928058 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:02.928065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:02.928128 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:02.958559 1685746 cri.go:96] found id: ""
	I1222 01:41:02.958595 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.958605 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:02.958612 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:02.958675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:02.984249 1685746 cri.go:96] found id: ""
	I1222 01:41:02.984272 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.984281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:02.984287 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:02.984355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:03.033125 1685746 cri.go:96] found id: ""
	I1222 01:41:03.033152 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.033161 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:03.033167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:03.033228 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:03.058557 1685746 cri.go:96] found id: ""
	I1222 01:41:03.058583 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.058591 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:03.058598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:03.058657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:03.089068 1685746 cri.go:96] found id: ""
	I1222 01:41:03.089112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.089122 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:03.089132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:03.089210 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:03.119177 1685746 cri.go:96] found id: ""
	I1222 01:41:03.119201 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.119210 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:03.119220 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:03.119231 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:03.182970 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:03.183000 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:03.183013 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:03.207694 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:03.207726 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:03.238481 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:03.238559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:03.311496 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:03.311531 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:05.829656 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:05.840301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:05.840394 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:05.867057 1685746 cri.go:96] found id: ""
	I1222 01:41:05.867080 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.867089 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:05.867095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:05.867155 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:05.897184 1685746 cri.go:96] found id: ""
	I1222 01:41:05.897206 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.897215 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:05.897221 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:05.897284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:05.922902 1685746 cri.go:96] found id: ""
	I1222 01:41:05.922924 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.922933 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:05.922940 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:05.923001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:05.947567 1685746 cri.go:96] found id: ""
	I1222 01:41:05.947591 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.947600 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:05.947606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:05.947725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:05.973767 1685746 cri.go:96] found id: ""
	I1222 01:41:05.973795 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.973803 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:05.973810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:05.973870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:05.999045 1685746 cri.go:96] found id: ""
	I1222 01:41:05.999075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.999084 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:05.999090 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:05.999156 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:06.037292 1685746 cri.go:96] found id: ""
	I1222 01:41:06.037323 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.037331 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:06.037338 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:06.037403 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:06.063105 1685746 cri.go:96] found id: ""
	I1222 01:41:06.063136 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.063145 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:06.063155 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:06.063166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:06.118645 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:06.118682 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:06.134249 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:06.134283 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:06.202948 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:06.202967 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:06.202978 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:06.227736 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:06.227770 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:04.248851 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:06.748841 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:08.763766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:08.776166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:08.776292 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:08.802744 1685746 cri.go:96] found id: ""
	I1222 01:41:08.802770 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.802780 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:08.802787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:08.802897 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:08.829155 1685746 cri.go:96] found id: ""
	I1222 01:41:08.829196 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.829205 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:08.829212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:08.829286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:08.853323 1685746 cri.go:96] found id: ""
	I1222 01:41:08.853358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.853368 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:08.853374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:08.853442 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:08.878843 1685746 cri.go:96] found id: ""
	I1222 01:41:08.878871 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.878880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:08.878887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:08.878948 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:08.907348 1685746 cri.go:96] found id: ""
	I1222 01:41:08.907374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.907383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:08.907390 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:08.907459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:08.935980 1685746 cri.go:96] found id: ""
	I1222 01:41:08.936006 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.936015 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:08.936022 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:08.936103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:08.965110 1685746 cri.go:96] found id: ""
	I1222 01:41:08.965149 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.965159 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:08.965165 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:08.965240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:08.991481 1685746 cri.go:96] found id: ""
	I1222 01:41:08.991509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.991518 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:08.991527 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:08.991539 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:09.007297 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:09.007330 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:09.077476 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:09.077557 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:09.077597 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:09.102923 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:09.102958 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:09.131422 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:09.131450 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:09.248676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:11.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:11.686744 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:11.697606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:11.697689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:11.722593 1685746 cri.go:96] found id: ""
	I1222 01:41:11.722664 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.722686 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:11.722701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:11.722796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:11.767413 1685746 cri.go:96] found id: ""
	I1222 01:41:11.767439 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.767448 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:11.767454 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:11.767526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:11.800344 1685746 cri.go:96] found id: ""
	I1222 01:41:11.800433 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.800466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:11.800487 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:11.800594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:11.836608 1685746 cri.go:96] found id: ""
	I1222 01:41:11.836693 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.836717 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:11.836755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:11.836854 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:11.862781 1685746 cri.go:96] found id: ""
	I1222 01:41:11.862808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.862818 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:11.862830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:11.862894 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:11.891376 1685746 cri.go:96] found id: ""
	I1222 01:41:11.891401 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.891410 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:11.891416 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:11.891480 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:11.920553 1685746 cri.go:96] found id: ""
	I1222 01:41:11.920581 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.920590 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:11.920596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:11.920657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:11.948610 1685746 cri.go:96] found id: ""
	I1222 01:41:11.948634 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.948642 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:11.948651 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:11.948662 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:12.006298 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:12.006340 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:12.022860 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:12.022889 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:12.087185 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:12.087252 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:12.087282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:12.112381 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:12.112415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:14.645175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:14.655581 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:14.655655 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:14.683086 1685746 cri.go:96] found id: ""
	I1222 01:41:14.683110 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.683118 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:14.683125 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:14.683192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:14.708684 1685746 cri.go:96] found id: ""
	I1222 01:41:14.708707 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.708716 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:14.708723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:14.708783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:14.733550 1685746 cri.go:96] found id: ""
	I1222 01:41:14.733572 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.733580 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:14.733586 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:14.733653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:14.762029 1685746 cri.go:96] found id: ""
	I1222 01:41:14.762052 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.762061 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:14.762068 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:14.762191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:14.802569 1685746 cri.go:96] found id: ""
	I1222 01:41:14.802593 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.802602 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:14.802609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:14.802668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:14.829402 1685746 cri.go:96] found id: ""
	I1222 01:41:14.829425 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.829434 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:14.829440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:14.829499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:14.854254 1685746 cri.go:96] found id: ""
	I1222 01:41:14.854276 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.854285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:14.854291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:14.854350 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:14.879183 1685746 cri.go:96] found id: ""
	I1222 01:41:14.879205 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.879213 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:14.879222 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:14.879239 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:14.933758 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:14.933795 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:14.948809 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:14.948834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:15.022478 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:15.022594 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:15.022610 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:15.071291 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:15.071336 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:14.248149 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:16.748036 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:17.608065 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:17.618810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:17.618881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:17.643606 1685746 cri.go:96] found id: ""
	I1222 01:41:17.643633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.643643 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:17.643650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:17.643760 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:17.669609 1685746 cri.go:96] found id: ""
	I1222 01:41:17.669639 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.669649 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:17.669656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:17.669725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:17.694910 1685746 cri.go:96] found id: ""
	I1222 01:41:17.694934 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.694943 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:17.694950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:17.695009 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:17.721067 1685746 cri.go:96] found id: ""
	I1222 01:41:17.721101 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.721111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:17.721118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:17.721251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:17.762594 1685746 cri.go:96] found id: ""
	I1222 01:41:17.762669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.762691 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:17.762715 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:17.762802 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:17.806835 1685746 cri.go:96] found id: ""
	I1222 01:41:17.806870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.806880 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:17.806887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:17.806964 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:17.837236 1685746 cri.go:96] found id: ""
	I1222 01:41:17.837273 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.837284 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:17.837291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:17.837362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:17.867730 1685746 cri.go:96] found id: ""
	I1222 01:41:17.867802 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.867825 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:17.867840 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:17.867852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:17.927517 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:17.927555 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:17.943454 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:17.943484 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:18.012436 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:18.012522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:18.012553 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:18.040219 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:18.040262 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:20.572279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:20.583193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:20.583266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:20.609051 1685746 cri.go:96] found id: ""
	I1222 01:41:20.609075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.609083 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:20.609089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:20.609150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:20.635365 1685746 cri.go:96] found id: ""
	I1222 01:41:20.635391 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.635400 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:20.635406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:20.635470 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:20.664505 1685746 cri.go:96] found id: ""
	I1222 01:41:20.664532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.664541 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:20.664547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:20.664609 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:20.690863 1685746 cri.go:96] found id: ""
	I1222 01:41:20.690887 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.690904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:20.690916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:20.690981 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:20.716167 1685746 cri.go:96] found id: ""
	I1222 01:41:20.716188 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.716196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:20.716203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:20.716262 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:20.758512 1685746 cri.go:96] found id: ""
	I1222 01:41:20.758538 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.758547 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:20.758554 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:20.758612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:20.789839 1685746 cri.go:96] found id: ""
	I1222 01:41:20.789866 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.789875 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:20.789882 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:20.789944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:20.823216 1685746 cri.go:96] found id: ""
	I1222 01:41:20.823244 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.823254 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:20.823263 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:20.823275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:20.878834 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:20.878873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:20.894375 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:20.894409 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:20.963456 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:41:20.963479 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:20.963518 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:20.992875 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:20.992916 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:18.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:21.248234 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:23.526237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:23.540126 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:23.540244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:23.567806 1685746 cri.go:96] found id: ""
	I1222 01:41:23.567833 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.567842 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:23.567849 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:23.567915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:23.594496 1685746 cri.go:96] found id: ""
	I1222 01:41:23.594525 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.594538 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:23.594546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:23.594614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:23.621007 1685746 cri.go:96] found id: ""
	I1222 01:41:23.621034 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.621043 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:23.621050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:23.621111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:23.646829 1685746 cri.go:96] found id: ""
	I1222 01:41:23.646857 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.646867 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:23.646874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:23.646941 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:23.672993 1685746 cri.go:96] found id: ""
	I1222 01:41:23.673020 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.673030 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:23.673036 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:23.673099 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:23.704873 1685746 cri.go:96] found id: ""
	I1222 01:41:23.704901 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.704910 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:23.704916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:23.704980 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:23.731220 1685746 cri.go:96] found id: ""
	I1222 01:41:23.731248 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.731259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:23.731265 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:23.731330 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:23.769641 1685746 cri.go:96] found id: ""
	I1222 01:41:23.769669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.769678 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:23.769687 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:23.769701 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:23.811900 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:23.811928 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:23.870851 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:23.870887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:23.886411 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:23.886488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:23.954566 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:41:23.954588 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:23.954602 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.483766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:26.495024 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:26.495100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:26.521679 1685746 cri.go:96] found id: ""
	I1222 01:41:26.521706 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.521716 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:26.521723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:26.521786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:26.552746 1685746 cri.go:96] found id: ""
	I1222 01:41:26.552773 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.552782 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:26.552789 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:26.552856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:26.580045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.580072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.580082 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:26.580088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:26.580151 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:26.606656 1685746 cri.go:96] found id: ""
	I1222 01:41:26.606683 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.606693 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:26.606700 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:26.606759 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:26.632499 1685746 cri.go:96] found id: ""
	I1222 01:41:26.632539 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.632548 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:26.632556 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:26.632640 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:26.664045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.664072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.664082 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:26.664089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:26.664172 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	W1222 01:41:23.248384 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:25.748529 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:27.748967 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:26.689648 1685746 cri.go:96] found id: ""
	I1222 01:41:26.689672 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.689693 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:26.689704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:26.689772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:26.715926 1685746 cri.go:96] found id: ""
	I1222 01:41:26.715949 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.715958 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:26.715966 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:26.715977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:26.779696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:26.779785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:26.802335 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:26.802412 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:26.866575 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:41:26.866599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:26.866613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.893136 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:26.893176 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:29.425895 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:29.438488 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:29.438569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:29.467384 1685746 cri.go:96] found id: ""
	I1222 01:41:29.467415 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.467426 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:29.467432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:29.467497 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:29.502253 1685746 cri.go:96] found id: ""
	I1222 01:41:29.502277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.502285 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:29.502291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:29.502351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:29.538703 1685746 cri.go:96] found id: ""
	I1222 01:41:29.538730 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.538739 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:29.538747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:29.538809 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:29.567395 1685746 cri.go:96] found id: ""
	I1222 01:41:29.567422 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.567431 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:29.567439 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:29.567500 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:29.595415 1685746 cri.go:96] found id: ""
	I1222 01:41:29.595493 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.595508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:29.595516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:29.595583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:29.622583 1685746 cri.go:96] found id: ""
	I1222 01:41:29.622611 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.622620 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:29.622627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:29.622693 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:29.649130 1685746 cri.go:96] found id: ""
	I1222 01:41:29.649156 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.649166 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:29.649173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:29.649240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:29.676205 1685746 cri.go:96] found id: ""
	I1222 01:41:29.676231 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.676240 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:29.676250 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:29.676279 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:29.731980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:29.732016 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:29.747474 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:29.747503 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:29.833319 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:29.833342 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:29.833355 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:29.859398 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:29.859432 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:30.247999 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:32.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:32.387755 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:32.398548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:32.398639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:32.422848 1685746 cri.go:96] found id: ""
	I1222 01:41:32.422870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.422879 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:32.422885 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:32.422976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:32.448126 1685746 cri.go:96] found id: ""
	I1222 01:41:32.448153 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.448162 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:32.448171 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:32.448233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:32.476732 1685746 cri.go:96] found id: ""
	I1222 01:41:32.476769 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.476779 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:32.476785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:32.476856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:32.521856 1685746 cri.go:96] found id: ""
	I1222 01:41:32.521885 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.521915 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:32.521923 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:32.522010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:32.559083 1685746 cri.go:96] found id: ""
	I1222 01:41:32.559112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.559121 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:32.559128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:32.559199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:32.585037 1685746 cri.go:96] found id: ""
	I1222 01:41:32.585066 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.585076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:32.585082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:32.585142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:32.611094 1685746 cri.go:96] found id: ""
	I1222 01:41:32.611117 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.611126 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:32.611132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:32.611200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:32.636572 1685746 cri.go:96] found id: ""
	I1222 01:41:32.636598 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.636606 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:32.636614 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:32.636626 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:32.691721 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:32.691756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:32.706757 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:32.706791 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:32.784203 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:32.784277 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:32.784302 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:32.812067 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:32.812099 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:35.344181 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:35.354549 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:35.354621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:35.378138 1685746 cri.go:96] found id: ""
	I1222 01:41:35.378160 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.378169 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:35.378177 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:35.378236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:35.403725 1685746 cri.go:96] found id: ""
	I1222 01:41:35.403748 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.403757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:35.403764 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:35.403825 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:35.429025 1685746 cri.go:96] found id: ""
	I1222 01:41:35.429050 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.429059 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:35.429066 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:35.429129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:35.459607 1685746 cri.go:96] found id: ""
	I1222 01:41:35.459633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.459642 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:35.459649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:35.459707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:35.483992 1685746 cri.go:96] found id: ""
	I1222 01:41:35.484015 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.484024 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:35.484031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:35.484094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:35.517254 1685746 cri.go:96] found id: ""
	I1222 01:41:35.517277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.517286 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:35.517293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:35.517353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:35.546137 1685746 cri.go:96] found id: ""
	I1222 01:41:35.546219 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.546242 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:35.546284 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:35.546378 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:35.576307 1685746 cri.go:96] found id: ""
	I1222 01:41:35.576329 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.576338 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:35.576347 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:35.576358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:35.631853 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:35.631887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:35.646787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:35.646827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:35.713895 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:35.713927 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:35.713943 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:35.739168 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:35.739250 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:34.248875 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:36.748177 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:38.278358 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:38.289460 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:38.289534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:38.316292 1685746 cri.go:96] found id: ""
	I1222 01:41:38.316320 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.316329 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:38.316336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:38.316416 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:38.344932 1685746 cri.go:96] found id: ""
	I1222 01:41:38.344960 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.344969 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:38.344976 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:38.345038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:38.371484 1685746 cri.go:96] found id: ""
	I1222 01:41:38.371509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.371519 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:38.371525 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:38.371594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:38.401114 1685746 cri.go:96] found id: ""
	I1222 01:41:38.401140 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.401149 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:38.401157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:38.401217 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:38.427857 1685746 cri.go:96] found id: ""
	I1222 01:41:38.427881 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.427890 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:38.427897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:38.427962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:38.453333 1685746 cri.go:96] found id: ""
	I1222 01:41:38.453358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.453367 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:38.453374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:38.453455 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:38.477527 1685746 cri.go:96] found id: ""
	I1222 01:41:38.477610 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.477633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:38.477655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:38.477748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:38.523741 1685746 cri.go:96] found id: ""
	I1222 01:41:38.523763 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.523772 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:38.523787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:38.523798 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:38.595469 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:38.595491 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:38.595508 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:38.621769 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:38.621808 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:38.651477 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:38.651507 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:38.710896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:38.710934 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.227040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:41.237881 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:41.237954 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:41.265636 1685746 cri.go:96] found id: ""
	I1222 01:41:41.265671 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.265680 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:41.265687 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:41.265757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:41.291304 1685746 cri.go:96] found id: ""
	I1222 01:41:41.291330 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.291339 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:41.291346 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:41.291414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:41.316968 1685746 cri.go:96] found id: ""
	I1222 01:41:41.317003 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.317013 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:41.317020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:41.317094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:41.342750 1685746 cri.go:96] found id: ""
	I1222 01:41:41.342779 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.342794 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:41.342801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:41.342865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:41.368173 1685746 cri.go:96] found id: ""
	I1222 01:41:41.368197 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.368205 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:41.368212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:41.368275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:41.396263 1685746 cri.go:96] found id: ""
	I1222 01:41:41.396290 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.396300 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:41.396308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:41.396380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:41.424002 1685746 cri.go:96] found id: ""
	I1222 01:41:41.424028 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.424037 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:41.424044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:41.424104 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:41.450858 1685746 cri.go:96] found id: ""
	I1222 01:41:41.450886 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.450894 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:41.450904 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:41.450915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:41.510703 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:41.510785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.529398 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:41.529475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:41.596968 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:41.596989 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:41.597002 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:41.623436 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:41.623472 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:39.248106 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:41.748067 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:44.153585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:44.164792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:44.164865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:44.190259 1685746 cri.go:96] found id: ""
	I1222 01:41:44.190282 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.190290 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:44.190297 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:44.190357 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:44.223886 1685746 cri.go:96] found id: ""
	I1222 01:41:44.223911 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.223922 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:44.223929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:44.223988 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:44.249898 1685746 cri.go:96] found id: ""
	I1222 01:41:44.249922 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.249931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:44.249948 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:44.250010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:44.275190 1685746 cri.go:96] found id: ""
	I1222 01:41:44.275217 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.275227 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:44.275233 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:44.275325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:44.301198 1685746 cri.go:96] found id: ""
	I1222 01:41:44.301221 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.301230 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:44.301237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:44.301311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:44.325952 1685746 cri.go:96] found id: ""
	I1222 01:41:44.325990 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.326000 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:44.326023 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:44.326154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:44.352189 1685746 cri.go:96] found id: ""
	I1222 01:41:44.352227 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.352236 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:44.352259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:44.352334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:44.377820 1685746 cri.go:96] found id: ""
	I1222 01:41:44.377848 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.377858 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:44.377868 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:44.377879 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:44.393230 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:44.393258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:44.463151 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:44.463175 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:44.463188 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:44.488611 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:44.488690 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:44.523935 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:44.524011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:44.248599 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:46.748094 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:47.091277 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:47.102299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:47.102374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:47.128309 1685746 cri.go:96] found id: ""
	I1222 01:41:47.128334 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.128344 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:47.128351 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:47.128431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:47.154429 1685746 cri.go:96] found id: ""
	I1222 01:41:47.154456 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.154465 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:47.154473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:47.154535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:47.179829 1685746 cri.go:96] found id: ""
	I1222 01:41:47.179856 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.179865 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:47.179872 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:47.179933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:47.204965 1685746 cri.go:96] found id: ""
	I1222 01:41:47.204999 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.205009 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:47.205016 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:47.205088 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:47.231912 1685746 cri.go:96] found id: ""
	I1222 01:41:47.231939 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.231949 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:47.231955 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:47.232043 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:47.262187 1685746 cri.go:96] found id: ""
	I1222 01:41:47.262215 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.262230 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:47.262237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:47.262301 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:47.287536 1685746 cri.go:96] found id: ""
	I1222 01:41:47.287567 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.287577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:47.287583 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:47.287648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:47.313516 1685746 cri.go:96] found id: ""
	I1222 01:41:47.313544 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.313553 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:47.313563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:47.313573 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:47.369295 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:47.369329 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:47.387169 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:47.387197 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:47.455311 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:47.455335 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:47.455347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:47.481041 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:47.481078 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:50.030868 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:50.043616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:50.043692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:50.072180 1685746 cri.go:96] found id: ""
	I1222 01:41:50.072210 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.072220 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:50.072229 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:50.072297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:50.100979 1685746 cri.go:96] found id: ""
	I1222 01:41:50.101005 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.101014 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:50.101021 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:50.101091 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:50.128360 1685746 cri.go:96] found id: ""
	I1222 01:41:50.128392 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.128404 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:50.128411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:50.128476 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:50.154912 1685746 cri.go:96] found id: ""
	I1222 01:41:50.154945 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.154955 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:50.154963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:50.155033 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:50.181433 1685746 cri.go:96] found id: ""
	I1222 01:41:50.181465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.181474 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:50.181483 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:50.181553 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:50.207260 1685746 cri.go:96] found id: ""
	I1222 01:41:50.207289 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.207299 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:50.207305 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:50.207366 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:50.234601 1685746 cri.go:96] found id: ""
	I1222 01:41:50.234649 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.234659 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:50.234666 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:50.234744 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:50.264579 1685746 cri.go:96] found id: ""
	I1222 01:41:50.264621 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.264631 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:50.264641 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:50.264661 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:50.321078 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:50.321112 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:50.336044 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:50.336069 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:50.401373 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:50.401396 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:50.401410 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:50.428108 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:50.428151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:48.749155 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:51.248977 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:52.958393 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:52.969793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:52.969867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:53.021307 1685746 cri.go:96] found id: ""
	I1222 01:41:53.021331 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.021340 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:53.021352 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:53.021415 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:53.053765 1685746 cri.go:96] found id: ""
	I1222 01:41:53.053789 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.053798 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:53.053804 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:53.053872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:53.079107 1685746 cri.go:96] found id: ""
	I1222 01:41:53.079135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.079144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:53.079152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:53.079214 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:53.106101 1685746 cri.go:96] found id: ""
	I1222 01:41:53.106130 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.106138 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:53.106145 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:53.106209 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:53.135616 1685746 cri.go:96] found id: ""
	I1222 01:41:53.135643 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.135652 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:53.135659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:53.135766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:53.160318 1685746 cri.go:96] found id: ""
	I1222 01:41:53.160344 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.160353 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:53.160360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:53.160451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:53.185257 1685746 cri.go:96] found id: ""
	I1222 01:41:53.185297 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.185306 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:53.185313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:53.185401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:53.210753 1685746 cri.go:96] found id: ""
	I1222 01:41:53.210824 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.210839 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:53.210855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:53.210867 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:53.237290 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:53.237323 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:53.267342 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:53.267374 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:53.323394 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:53.323429 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:53.339435 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:53.339465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:53.403286 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:55.903619 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:55.914760 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:55.914836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:55.939507 1685746 cri.go:96] found id: ""
	I1222 01:41:55.939532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.939541 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:55.939548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:55.939614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:55.965607 1685746 cri.go:96] found id: ""
	I1222 01:41:55.965633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.965643 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:55.965649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:55.965715 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:56.006138 1685746 cri.go:96] found id: ""
	I1222 01:41:56.006171 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.006181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:56.006188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:56.006256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:56.040087 1685746 cri.go:96] found id: ""
	I1222 01:41:56.040116 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.040125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:56.040131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:56.040191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:56.068695 1685746 cri.go:96] found id: ""
	I1222 01:41:56.068719 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.068727 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:56.068734 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:56.068795 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:56.096726 1685746 cri.go:96] found id: ""
	I1222 01:41:56.096808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.096832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:56.096854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:56.096963 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:56.125548 1685746 cri.go:96] found id: ""
	I1222 01:41:56.125627 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.125652 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:56.125675 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:56.125763 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:56.150956 1685746 cri.go:96] found id: ""
	I1222 01:41:56.150986 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.150995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:56.151005 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:56.151049 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:56.216560 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:56.216581 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:56.216594 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:56.242334 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:56.242368 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:56.270763 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:56.270793 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:56.325996 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:56.326038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:41:53.748987 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:56.248859 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:58.841618 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:58.852321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:58.852411 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:58.877439 1685746 cri.go:96] found id: ""
	I1222 01:41:58.877465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.877475 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:58.877482 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:58.877542 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:58.902343 1685746 cri.go:96] found id: ""
	I1222 01:41:58.902369 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.902378 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:58.902385 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:58.902443 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:58.927733 1685746 cri.go:96] found id: ""
	I1222 01:41:58.927758 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.927767 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:58.927774 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:58.927834 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:58.954349 1685746 cri.go:96] found id: ""
	I1222 01:41:58.954374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.954384 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:58.954391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:58.954464 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:58.984449 1685746 cri.go:96] found id: ""
	I1222 01:41:58.984519 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.984533 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:58.984541 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:58.984612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:59.020245 1685746 cri.go:96] found id: ""
	I1222 01:41:59.020277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.020294 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:59.020303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:59.020387 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:59.059067 1685746 cri.go:96] found id: ""
	I1222 01:41:59.059135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.059157 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:59.059170 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:59.059244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:59.090327 1685746 cri.go:96] found id: ""
	I1222 01:41:59.090355 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.090364 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:59.090372 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:59.090384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:59.149768 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:59.149809 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:59.164825 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:59.164857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:59.232698 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:59.232720 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:59.232734 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:59.258805 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:59.258840 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:58.748026 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:00.748292 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:01.787611 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:01.799088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:01.799206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:01.829442 1685746 cri.go:96] found id: ""
	I1222 01:42:01.829521 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.829543 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:01.829566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:01.829657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:01.856095 1685746 cri.go:96] found id: ""
	I1222 01:42:01.856122 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.856132 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:01.856139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:01.856203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:01.882443 1685746 cri.go:96] found id: ""
	I1222 01:42:01.882469 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.882478 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:01.882485 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:01.882549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:01.908008 1685746 cri.go:96] found id: ""
	I1222 01:42:01.908033 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.908043 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:01.908049 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:01.908111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:01.934350 1685746 cri.go:96] found id: ""
	I1222 01:42:01.934377 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.934386 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:01.934393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:01.934457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:01.960407 1685746 cri.go:96] found id: ""
	I1222 01:42:01.960433 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.960442 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:01.960449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:01.960512 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:01.988879 1685746 cri.go:96] found id: ""
	I1222 01:42:01.988915 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.988925 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:01.988931 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:01.989000 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:02.021404 1685746 cri.go:96] found id: ""
	I1222 01:42:02.021444 1685746 logs.go:282] 0 containers: []
	W1222 01:42:02.021454 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:02.021464 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:02.021476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:02.053252 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:02.053282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:02.111509 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:02.111548 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:02.127002 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:02.127081 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:02.196408 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:02.196429 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:02.196442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:04.723107 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:04.734699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:04.734786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:04.771439 1685746 cri.go:96] found id: ""
	I1222 01:42:04.771462 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.771471 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:04.771477 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:04.771540 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:04.806612 1685746 cri.go:96] found id: ""
	I1222 01:42:04.806639 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.806648 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:04.806655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:04.806714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:04.832290 1685746 cri.go:96] found id: ""
	I1222 01:42:04.832320 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.832329 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:04.832336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:04.832404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:04.860422 1685746 cri.go:96] found id: ""
	I1222 01:42:04.860460 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.860469 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:04.860494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:04.860603 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:04.885397 1685746 cri.go:96] found id: ""
	I1222 01:42:04.885424 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.885433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:04.885440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:04.885524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:04.910499 1685746 cri.go:96] found id: ""
	I1222 01:42:04.910529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.910539 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:04.910546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:04.910607 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:04.934849 1685746 cri.go:96] found id: ""
	I1222 01:42:04.934887 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.934897 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:04.934921 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:04.935013 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:04.964384 1685746 cri.go:96] found id: ""
	I1222 01:42:04.964411 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.964420 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:04.964429 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:04.964460 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:05.023249 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:05.023347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:05.042677 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:05.042702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:05.113125 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:05.113151 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:05.113167 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:05.139072 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:05.139109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:03.248327 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:05.748676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:07.672253 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:07.683433 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:07.683523 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:07.710000 1685746 cri.go:96] found id: ""
	I1222 01:42:07.710025 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.710033 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:07.710040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:07.710129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:07.749657 1685746 cri.go:96] found id: ""
	I1222 01:42:07.749685 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.749695 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:07.749702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:07.749769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:07.779817 1685746 cri.go:96] found id: ""
	I1222 01:42:07.779844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.779853 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:07.779860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:07.779920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:07.809501 1685746 cri.go:96] found id: ""
	I1222 01:42:07.809529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.809538 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:07.809546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:07.809606 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:07.834291 1685746 cri.go:96] found id: ""
	I1222 01:42:07.834318 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.834327 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:07.834334 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:07.834395 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:07.859724 1685746 cri.go:96] found id: ""
	I1222 01:42:07.859791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.859807 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:07.859814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:07.859874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:07.891259 1685746 cri.go:96] found id: ""
	I1222 01:42:07.891287 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.891296 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:07.891303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:07.891362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:07.916371 1685746 cri.go:96] found id: ""
	I1222 01:42:07.916451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.916467 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:07.916477 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:07.916489 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:07.943955 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:07.943981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:08.000957 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:08.001003 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:08.021265 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:08.021299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:08.098699 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:08.098725 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:08.098739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:10.625986 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:10.637185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:10.637275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:10.663011 1685746 cri.go:96] found id: ""
	I1222 01:42:10.663039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.663048 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:10.663055 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:10.663121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:10.689593 1685746 cri.go:96] found id: ""
	I1222 01:42:10.689623 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.689633 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:10.689639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:10.689704 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:10.718520 1685746 cri.go:96] found id: ""
	I1222 01:42:10.718545 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.718554 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:10.718561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:10.718627 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:10.748796 1685746 cri.go:96] found id: ""
	I1222 01:42:10.748829 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.748839 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:10.748846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:10.748919 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:10.780456 1685746 cri.go:96] found id: ""
	I1222 01:42:10.780493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.780508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:10.780515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:10.780591 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:10.810196 1685746 cri.go:96] found id: ""
	I1222 01:42:10.810234 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.810243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:10.810250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:10.810346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:10.836475 1685746 cri.go:96] found id: ""
	I1222 01:42:10.836502 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.836511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:10.836518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:10.836582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:10.862222 1685746 cri.go:96] found id: ""
	I1222 01:42:10.862246 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.862255 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:10.862264 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:10.862275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:10.918613 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:10.918648 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:10.933449 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:10.933478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:11.013628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:11.013706 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:11.013738 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:11.042713 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:11.042803 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:08.248287 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:10.748100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:12.748911 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:13.581897 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:13.592897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:13.592969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:13.621158 1685746 cri.go:96] found id: ""
	I1222 01:42:13.621184 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.621194 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:13.621200 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:13.621265 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:13.646742 1685746 cri.go:96] found id: ""
	I1222 01:42:13.646769 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.646778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:13.646784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:13.646843 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:13.671981 1685746 cri.go:96] found id: ""
	I1222 01:42:13.672014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.672023 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:13.672030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:13.672093 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:13.697359 1685746 cri.go:96] found id: ""
	I1222 01:42:13.697387 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.697397 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:13.697408 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:13.697471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:13.723455 1685746 cri.go:96] found id: ""
	I1222 01:42:13.723481 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.723491 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:13.723499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:13.723560 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:13.762227 1685746 cri.go:96] found id: ""
	I1222 01:42:13.762251 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.762259 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:13.762266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:13.762325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:13.792416 1685746 cri.go:96] found id: ""
	I1222 01:42:13.792440 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.792448 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:13.792455 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:13.792521 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:13.824151 1685746 cri.go:96] found id: ""
	I1222 01:42:13.824178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.824188 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:13.824227 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:13.824251 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:13.839610 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:13.839639 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:13.903103 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:13.903125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:13.903138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:13.928958 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:13.928992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:13.959685 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:13.959714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.518219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:16.529223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:16.529294 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:16.555927 1685746 cri.go:96] found id: ""
	I1222 01:42:16.555953 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.555962 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:16.555969 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:16.556028 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:16.581196 1685746 cri.go:96] found id: ""
	I1222 01:42:16.581223 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.581233 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:16.581240 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:16.581303 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:16.607543 1685746 cri.go:96] found id: ""
	I1222 01:42:16.607569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.607578 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:16.607585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:16.607651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:16.637077 1685746 cri.go:96] found id: ""
	I1222 01:42:16.637106 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.637116 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:16.637123 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:16.637183 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:16.662155 1685746 cri.go:96] found id: ""
	I1222 01:42:16.662178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.662187 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:16.662193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:16.662257 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	W1222 01:42:14.749008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:17.249086 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:16.694483 1685746 cri.go:96] found id: ""
	I1222 01:42:16.694507 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.694516 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:16.694523 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:16.694582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:16.719153 1685746 cri.go:96] found id: ""
	I1222 01:42:16.719178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.719188 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:16.719195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:16.719258 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:16.750982 1685746 cri.go:96] found id: ""
	I1222 01:42:16.751007 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.751017 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:16.751026 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:16.751038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.809848 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:16.809888 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:16.828821 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:16.828852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:16.896032 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:16.896058 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:16.896071 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:16.921650 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:16.921686 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.450391 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:19.461241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:19.461314 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:19.488679 1685746 cri.go:96] found id: ""
	I1222 01:42:19.488705 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.488715 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:19.488722 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:19.488784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:19.514947 1685746 cri.go:96] found id: ""
	I1222 01:42:19.514972 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.514982 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:19.514989 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:19.515051 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:19.541761 1685746 cri.go:96] found id: ""
	I1222 01:42:19.541786 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.541795 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:19.541802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:19.541867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:19.566418 1685746 cri.go:96] found id: ""
	I1222 01:42:19.566441 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.566450 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:19.566456 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:19.566515 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:19.591707 1685746 cri.go:96] found id: ""
	I1222 01:42:19.591739 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.591748 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:19.591754 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:19.591857 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:19.618308 1685746 cri.go:96] found id: ""
	I1222 01:42:19.618343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.618352 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:19.618362 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:19.618441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:19.644750 1685746 cri.go:96] found id: ""
	I1222 01:42:19.644791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.644801 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:19.644808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:19.644883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:19.674267 1685746 cri.go:96] found id: ""
	I1222 01:42:19.674295 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.674304 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:19.674315 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:19.674327 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:19.689360 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:19.689445 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:19.766188 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:19.766263 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:19.766290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:19.793580 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:19.793657 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.829853 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:19.829884 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:42:19.748284 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:22.248100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:22.388471 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:22.399089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:22.399192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:22.428498 1685746 cri.go:96] found id: ""
	I1222 01:42:22.428569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.428583 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:22.428591 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:22.428672 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:22.458145 1685746 cri.go:96] found id: ""
	I1222 01:42:22.458182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.458196 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:22.458203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:22.458276 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:22.485165 1685746 cri.go:96] found id: ""
	I1222 01:42:22.485202 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.485212 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:22.485218 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:22.485283 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:22.510263 1685746 cri.go:96] found id: ""
	I1222 01:42:22.510292 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.510302 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:22.510308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:22.510374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:22.539347 1685746 cri.go:96] found id: ""
	I1222 01:42:22.539374 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.539383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:22.539391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:22.539453 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:22.564154 1685746 cri.go:96] found id: ""
	I1222 01:42:22.564182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.564193 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:22.564205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:22.564311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:22.593661 1685746 cri.go:96] found id: ""
	I1222 01:42:22.593688 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.593697 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:22.593703 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:22.593767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:22.618629 1685746 cri.go:96] found id: ""
	I1222 01:42:22.618654 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.618663 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:22.618672 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:22.618714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:22.675019 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:22.675057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:22.690208 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:22.690241 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:22.759102 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:22.759127 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:22.759140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:22.790419 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:22.790453 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:25.330239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:25.341121 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:25.341190 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:25.370417 1685746 cri.go:96] found id: ""
	I1222 01:42:25.370493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.370523 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:25.370543 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:25.370636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:25.399975 1685746 cri.go:96] found id: ""
	I1222 01:42:25.400000 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.400009 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:25.400015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:25.400075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:25.424384 1685746 cri.go:96] found id: ""
	I1222 01:42:25.424414 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.424424 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:25.424431 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:25.424491 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:25.453828 1685746 cri.go:96] found id: ""
	I1222 01:42:25.453916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.453956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:25.453984 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:25.454124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:25.480847 1685746 cri.go:96] found id: ""
	I1222 01:42:25.480868 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.480877 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:25.480883 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:25.480942 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:25.508776 1685746 cri.go:96] found id: ""
	I1222 01:42:25.508801 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.508810 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:25.508817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:25.508877 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:25.539362 1685746 cri.go:96] found id: ""
	I1222 01:42:25.539385 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.539396 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:25.539402 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:25.539461 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:25.566615 1685746 cri.go:96] found id: ""
	I1222 01:42:25.566641 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.566650 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:25.566659 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:25.566670 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:25.622750 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:25.622784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:25.638693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:25.638728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:25.702796 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:25.702823 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:25.702835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:25.727901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:25.727938 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:24.248221 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:26.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:29.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:31.247763 1681323 node_ready.go:38] duration metric: took 6m0.000217195s for node "no-preload-154186" to be "Ready" ...
	I1222 01:42:31.251066 1681323 out.go:203] 
	W1222 01:42:31.253946 1681323 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:42:31.253969 1681323 out.go:285] * 
	W1222 01:42:31.256107 1681323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:42:31.259342 1681323 out.go:203] 
	I1222 01:42:28.269113 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:28.280220 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:28.280317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:28.305926 1685746 cri.go:96] found id: ""
	I1222 01:42:28.305948 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.305957 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:28.305963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:28.306020 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:28.330985 1685746 cri.go:96] found id: ""
	I1222 01:42:28.331010 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.331020 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:28.331026 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:28.331086 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:28.357992 1685746 cri.go:96] found id: ""
	I1222 01:42:28.358018 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.358028 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:28.358035 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:28.358131 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:28.384559 1685746 cri.go:96] found id: ""
	I1222 01:42:28.384585 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.384594 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:28.384603 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:28.384665 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:28.412628 1685746 cri.go:96] found id: ""
	I1222 01:42:28.412650 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.412659 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:28.412665 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:28.412731 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:28.438582 1685746 cri.go:96] found id: ""
	I1222 01:42:28.438605 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.438613 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:28.438620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:28.438685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:28.468458 1685746 cri.go:96] found id: ""
	I1222 01:42:28.468484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.468493 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:28.468500 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:28.468565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:28.493207 1685746 cri.go:96] found id: ""
	I1222 01:42:28.493231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.493239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:28.493249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:28.493260 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:28.547741 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:28.547777 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:28.562578 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:28.562608 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:28.637227 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:28.637250 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:28.637263 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:28.662593 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:28.662632 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.190941 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:31.202783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:31.202858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:31.227601 1685746 cri.go:96] found id: ""
	I1222 01:42:31.227625 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.227633 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:31.227642 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:31.227718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:31.267011 1685746 cri.go:96] found id: ""
	I1222 01:42:31.267040 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.267049 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:31.267056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:31.267118 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:31.363207 1685746 cri.go:96] found id: ""
	I1222 01:42:31.363231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.363239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:31.363246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:31.363320 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:31.412753 1685746 cri.go:96] found id: ""
	I1222 01:42:31.412780 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.412788 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:31.412796 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:31.412858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:31.453115 1685746 cri.go:96] found id: ""
	I1222 01:42:31.453145 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.453154 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:31.453167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:31.453225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:31.492529 1685746 cri.go:96] found id: ""
	I1222 01:42:31.492550 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.492558 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:31.492565 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:31.492621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:31.529156 1685746 cri.go:96] found id: ""
	I1222 01:42:31.529179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.529187 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:31.529193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:31.529252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:31.561255 1685746 cri.go:96] found id: ""
	I1222 01:42:31.561283 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.561292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:31.561301 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:31.561314 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.622500 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:31.622526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325328676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325350666Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325391659Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325406576Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325416947Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325430043Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325439348Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325458006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325472513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325505277Z" level=info msg="Connect containerd service"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325765062Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.326389970Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344703104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344887163Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.345002741Z" level=info msg="Start recovering state"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344959344Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364047164Z" level=info msg="Start event monitor"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364252630Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364336438Z" level=info msg="Start streaming server"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364415274Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364479832Z" level=info msg="runtime interface starting up..."
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364536588Z" level=info msg="starting plugins..."
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364617967Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364818765Z" level=info msg="containerd successfully booted in 0.063428s"
	Dec 22 01:36:29 no-preload-154186 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:32.863895    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:32.864477    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:32.866128    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:32.866664    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:32.868219    3920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:42:32 up 1 day,  8:25,  0 user,  load average: 0.46, 0.76, 1.42
	Linux no-preload-154186 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:42:29 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:42:30 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 22 01:42:30 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:30 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:30 no-preload-154186 kubelet[3800]: E1222 01:42:30.282301    3800 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:42:30 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:42:30 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:42:30 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 22 01:42:30 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:30 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:31 no-preload-154186 kubelet[3805]: E1222 01:42:31.043125    3805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:42:31 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:42:31 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:42:31 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 22 01:42:31 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:31 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:31 no-preload-154186 kubelet[3823]: E1222 01:42:31.891441    3823 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:42:31 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:42:31 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:42:32 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 22 01:42:32 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:32 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:42:32 no-preload-154186 kubelet[3911]: E1222 01:42:32.825930    3911 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:42:32 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:42:32 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 2 (329.607155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.75s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (97.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1222 01:38:07.826107 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.207065061s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-869293
helpers_test.go:244: (dbg) docker inspect newest-cni-869293:

-- stdout --
	[
	    {
	        "Id": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	        "Created": "2025-12-22T01:28:35.561963158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1671292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:28:35.62747581Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hostname",
	        "HostsPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hosts",
	        "LogPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e-json.log",
	        "Name": "/newest-cni-869293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-869293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-869293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	                "LowerDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-869293",
	                "Source": "/var/lib/docker/volumes/newest-cni-869293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-869293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-869293",
	                "name.minikube.sigs.k8s.io": "newest-cni-869293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edd3d6fd3544b1c59cd2b427c94606af7bf1f69297eb5ee2ee5ccea43b72aa42",
	            "SandboxKey": "/var/run/docker/netns/edd3d6fd3544",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38695"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38697"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38701"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38699"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38700"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-869293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:ea:31:73:c7:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "237b6ac5b33ea8f647685859c16cf161283b5f3d52eea65816f2e7dfeb4ec191",
	                    "EndpointID": "c502bf347220d543d3dcc62fde9abce756967f8038246c4b47be420a228be076",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-869293",
	                        "05e1fe12904b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293: exit status 6 (307.880555ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:38:28.903603 1685221 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p old-k8s-version-433815                                                                                                                                                                                                                                │ old-k8s-version-433815       │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p no-preload-154186 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p no-preload-154186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:36:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:36:23.234823 1681323 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:36:23.235027 1681323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:23.235049 1681323 out.go:374] Setting ErrFile to fd 2...
	I1222 01:36:23.235067 1681323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:36:23.235421 1681323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:36:23.235901 1681323 out.go:368] Setting JSON to false
	I1222 01:36:23.237129 1681323 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116336,"bootTime":1766251047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:36:23.237207 1681323 start.go:143] virtualization:  
	I1222 01:36:23.240218 1681323 out.go:179] * [no-preload-154186] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:36:23.244197 1681323 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:36:23.244262 1681323 notify.go:221] Checking for updates...
	I1222 01:36:23.247308 1681323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:36:23.251437 1681323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:23.254483 1681323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:36:23.257414 1681323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:36:23.260441 1681323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:36:23.264003 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:23.264837 1681323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:36:23.295171 1681323 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:36:23.295305 1681323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:23.350537 1681323 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:23.341149383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:23.350648 1681323 docker.go:319] overlay module found
	I1222 01:36:23.353847 1681323 out.go:179] * Using the docker driver based on existing profile
	I1222 01:36:23.356749 1681323 start.go:309] selected driver: docker
	I1222 01:36:23.356777 1681323 start.go:928] validating driver "docker" against &{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:23.356883 1681323 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:36:23.357613 1681323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:36:23.417124 1681323 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:36:23.408084515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:36:23.417456 1681323 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:36:23.417483 1681323 cni.go:84] Creating CNI manager for ""
	I1222 01:36:23.417540 1681323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:36:23.417589 1681323 start.go:353] cluster config:
	{Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:23.420666 1681323 out.go:179] * Starting "no-preload-154186" primary control-plane node in "no-preload-154186" cluster
	I1222 01:36:23.423625 1681323 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:36:23.426661 1681323 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:36:23.429673 1681323 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:36:23.429835 1681323 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:36:23.430154 1681323 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:36:23.430238 1681323 cache.go:107] acquiring lock: {Name:mk3bde21e751b3aa3caf7a41c8a37e36cec6e7cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430340 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 01:36:23.430349 1681323 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.997µs
	I1222 01:36:23.430379 1681323 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 01:36:23.430401 1681323 cache.go:107] acquiring lock: {Name:mk4a15c8225bf94a78b514d4142ea41c6bb91faa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430458 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 01:36:23.430472 1681323 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 72.633µs
	I1222 01:36:23.430491 1681323 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430523 1681323 cache.go:107] acquiring lock: {Name:mkeb24b7f997eb1a1a3d59e2a2d68597fffc7c36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430589 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 01:36:23.430602 1681323 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 94.27µs
	I1222 01:36:23.430610 1681323 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430636 1681323 cache.go:107] acquiring lock: {Name:mkf2939c17635a47347d3721871a718b69a7a19c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430687 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 01:36:23.430709 1681323 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 74.782µs
	I1222 01:36:23.430717 1681323 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430735 1681323 cache.go:107] acquiring lock: {Name:mk1daf2f1163a462fd1f82e12b9d4b157cffc772 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430785 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 01:36:23.430802 1681323 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 62.638µs
	I1222 01:36:23.430824 1681323 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 01:36:23.430840 1681323 cache.go:107] acquiring lock: {Name:mk48171dacff6bbfb8016f0e5908022e81e1ea85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.430924 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1222 01:36:23.430937 1681323 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 103.344µs
	I1222 01:36:23.430969 1681323 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 01:36:23.431003 1681323 cache.go:107] acquiring lock: {Name:mkc08548a3ab9782a3dcbbb4e211790535cb9d14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.431057 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 01:36:23.431070 1681323 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 69.399µs
	I1222 01:36:23.431089 1681323 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 01:36:23.431107 1681323 cache.go:107] acquiring lock: {Name:mk2f653a9914a185aaa3299c67a548da6098dcf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.431143 1681323 cache.go:115] /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 01:36:23.431164 1681323 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.804µs
	I1222 01:36:23.431176 1681323 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 01:36:23.431183 1681323 cache.go:87] Successfully saved all images to host disk.
	I1222 01:36:23.450810 1681323 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:36:23.450833 1681323 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:36:23.450848 1681323 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:36:23.450878 1681323 start.go:360] acquireMachinesLock for no-preload-154186: {Name:mk9dee4f9b1c44d5e40729915965cd9e314df88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:36:23.450936 1681323 start.go:364] duration metric: took 37.506µs to acquireMachinesLock for "no-preload-154186"
	I1222 01:36:23.450961 1681323 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:36:23.450970 1681323 fix.go:54] fixHost starting: 
	I1222 01:36:23.451228 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:23.468570 1681323 fix.go:112] recreateIfNeeded on no-preload-154186: state=Stopped err=<nil>
	W1222 01:36:23.468607 1681323 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:36:23.472031 1681323 out.go:252] * Restarting existing docker container for "no-preload-154186" ...
	I1222 01:36:23.472128 1681323 cli_runner.go:164] Run: docker start no-preload-154186
	I1222 01:36:23.751686 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:23.775088 1681323 kic.go:430] container "no-preload-154186" state is running.
	I1222 01:36:23.775522 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:23.804788 1681323 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/config.json ...
	I1222 01:36:23.805037 1681323 machine.go:94] provisionDockerMachine start ...
	I1222 01:36:23.805105 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:23.831796 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:23.832139 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:23.832149 1681323 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:36:23.834213 1681323 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:36:26.965689 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:36:26.965717 1681323 ubuntu.go:182] provisioning hostname "no-preload-154186"
	I1222 01:36:26.965785 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:26.985217 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:26.985542 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:26.985560 1681323 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-154186 && echo "no-preload-154186" | sudo tee /etc/hostname
	I1222 01:36:27.127502 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-154186
	
	I1222 01:36:27.127590 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.145587 1681323 main.go:144] libmachine: Using SSH client type: native
	I1222 01:36:27.145900 1681323 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38702 <nil> <nil>}
	I1222 01:36:27.145916 1681323 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-154186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-154186/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-154186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:36:27.278718 1681323 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:36:27.278747 1681323 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:36:27.278768 1681323 ubuntu.go:190] setting up certificates
	I1222 01:36:27.278786 1681323 provision.go:84] configureAuth start
	I1222 01:36:27.278873 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:27.301231 1681323 provision.go:143] copyHostCerts
	I1222 01:36:27.301308 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:36:27.301328 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:36:27.301409 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:36:27.301556 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:36:27.301569 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:36:27.301598 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:36:27.301659 1681323 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:36:27.301669 1681323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:36:27.301695 1681323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:36:27.301746 1681323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.no-preload-154186 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-154186]
	I1222 01:36:27.754512 1681323 provision.go:177] copyRemoteCerts
	I1222 01:36:27.754594 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:36:27.754648 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.772550 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:27.874202 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:36:27.892571 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:36:27.911007 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:36:27.928834 1681323 provision.go:87] duration metric: took 650.003977ms to configureAuth
	I1222 01:36:27.928863 1681323 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:36:27.929086 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:27.929099 1681323 machine.go:97] duration metric: took 4.124054244s to provisionDockerMachine
	I1222 01:36:27.929107 1681323 start.go:293] postStartSetup for "no-preload-154186" (driver="docker")
	I1222 01:36:27.929119 1681323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:36:27.929165 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:36:27.929208 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:27.946963 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.042660 1681323 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:36:28.046171 1681323 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:36:28.046204 1681323 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:36:28.046222 1681323 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:36:28.046287 1681323 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:36:28.046377 1681323 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:36:28.046485 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:36:28.054291 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:36:28.073024 1681323 start.go:296] duration metric: took 143.901056ms for postStartSetup
	I1222 01:36:28.073108 1681323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:36:28.073167 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.091267 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.183597 1681323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:36:28.188658 1681323 fix.go:56] duration metric: took 4.737681885s for fixHost
	I1222 01:36:28.188687 1681323 start.go:83] releasing machines lock for "no-preload-154186", held for 4.737736532s
	I1222 01:36:28.188793 1681323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-154186
	I1222 01:36:28.206039 1681323 ssh_runner.go:195] Run: cat /version.json
	I1222 01:36:28.206158 1681323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:36:28.206221 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.206378 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:28.224770 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.230258 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:28.414932 1681323 ssh_runner.go:195] Run: systemctl --version
	I1222 01:36:28.421366 1681323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:36:28.425653 1681323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:36:28.425721 1681323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:36:28.433525 1681323 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:36:28.433549 1681323 start.go:496] detecting cgroup driver to use...
	I1222 01:36:28.433582 1681323 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:36:28.433651 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:36:28.451333 1681323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:36:28.464888 1681323 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:36:28.464974 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:36:28.480732 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:36:28.494042 1681323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:36:28.611667 1681323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:36:28.731604 1681323 docker.go:234] disabling docker service ...
	I1222 01:36:28.731674 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:36:28.747773 1681323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:36:28.761732 1681323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:36:28.883133 1681323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:36:29.013965 1681323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:36:29.029996 1681323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:36:29.046133 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:36:29.056270 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:36:29.066036 1681323 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:36:29.066163 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:36:29.075930 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:36:29.084710 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:36:29.093653 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:36:29.102647 1681323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:36:29.110826 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:36:29.119665 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:36:29.128698 1681323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:36:29.137543 1681323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:36:29.145415 1681323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:36:29.153357 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:29.268778 1681323 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:36:29.366806 1681323 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:36:29.366878 1681323 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:36:29.370821 1681323 start.go:564] Will wait 60s for crictl version
	I1222 01:36:29.370889 1681323 ssh_runner.go:195] Run: which crictl
	I1222 01:36:29.374398 1681323 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:36:29.401622 1681323 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:36:29.401693 1681323 ssh_runner.go:195] Run: containerd --version
	I1222 01:36:29.425502 1681323 ssh_runner.go:195] Run: containerd --version
	I1222 01:36:29.452207 1681323 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:36:29.455184 1681323 cli_runner.go:164] Run: docker network inspect no-preload-154186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:36:29.471412 1681323 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 01:36:29.475195 1681323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:29.484943 1681323 kubeadm.go:884] updating cluster {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:36:29.485070 1681323 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:36:29.485129 1681323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:36:29.515771 1681323 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:36:29.515798 1681323 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:36:29.515812 1681323 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:36:29.515907 1681323 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-154186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:36:29.515977 1681323 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:36:29.544359 1681323 cni.go:84] Creating CNI manager for ""
	I1222 01:36:29.544384 1681323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:36:29.544401 1681323 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:36:29.544424 1681323 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-154186 NodeName:no-preload-154186 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:36:29.544539 1681323 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-154186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:36:29.544615 1681323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:36:29.552325 1681323 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:36:29.552411 1681323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:36:29.560003 1681323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:36:29.572789 1681323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:36:29.585517 1681323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1222 01:36:29.599349 1681323 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:36:29.603106 1681323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:36:29.612969 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:29.733862 1681323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:29.752522 1681323 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186 for IP: 192.168.85.2
	I1222 01:36:29.752545 1681323 certs.go:195] generating shared ca certs ...
	I1222 01:36:29.752562 1681323 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:29.752701 1681323 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:36:29.752747 1681323 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:36:29.752758 1681323 certs.go:257] generating profile certs ...
	I1222 01:36:29.752867 1681323 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/client.key
	I1222 01:36:29.752925 1681323 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key.e54c24a5
	I1222 01:36:29.752976 1681323 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key
	I1222 01:36:29.753099 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:36:29.753135 1681323 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:36:29.753147 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:36:29.753174 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:36:29.753203 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:36:29.753232 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:36:29.753285 1681323 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:36:29.753910 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:36:29.782071 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:36:29.803383 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:36:29.824019 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:36:29.845035 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:36:29.866115 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:36:29.883918 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:36:29.900943 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/no-preload-154186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:36:29.918714 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:36:29.936559 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:36:29.954160 1681323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:36:29.972189 1681323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:36:29.985434 1681323 ssh_runner.go:195] Run: openssl version
	I1222 01:36:29.992444 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.000140 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:36:30.014964 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.043109 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.043223 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:36:30.108760 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:36:30.118305 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.127792 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:36:30.136802 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.141548 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.141643 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:36:30.184623 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:36:30.193382 1681323 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.201724 1681323 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:36:30.210242 1681323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.214881 1681323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.214969 1681323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:36:30.256748 1681323 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:36:30.264842 1681323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:36:30.268912 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:36:30.310683 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:36:30.352386 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:36:30.393519 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:36:30.434377 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:36:30.475355 1681323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:36:30.540692 1681323 kubeadm.go:401] StartCluster: {Name:no-preload-154186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-154186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:36:30.540782 1681323 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:36:30.540866 1681323 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:36:30.575229 1681323 cri.go:96] found id: ""
	I1222 01:36:30.575312 1681323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:36:30.584220 1681323 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:36:30.584293 1681323 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:36:30.584391 1681323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:36:30.594816 1681323 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:36:30.595221 1681323 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-154186" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:30.595322 1681323 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-154186" cluster setting kubeconfig missing "no-preload-154186" context setting]
	I1222 01:36:30.595620 1681323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.596925 1681323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:36:30.604842 1681323 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1222 01:36:30.604873 1681323 kubeadm.go:602] duration metric: took 20.560605ms to restartPrimaryControlPlane
	I1222 01:36:30.604883 1681323 kubeadm.go:403] duration metric: took 64.203267ms to StartCluster
	I1222 01:36:30.604898 1681323 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.604963 1681323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:36:30.605576 1681323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:36:30.605779 1681323 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:36:30.606072 1681323 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:36:30.606145 1681323 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:36:30.606208 1681323 addons.go:70] Setting storage-provisioner=true in profile "no-preload-154186"
	I1222 01:36:30.606221 1681323 addons.go:239] Setting addon storage-provisioner=true in "no-preload-154186"
	I1222 01:36:30.606247 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.606455 1681323 addons.go:70] Setting dashboard=true in profile "no-preload-154186"
	I1222 01:36:30.606480 1681323 addons.go:239] Setting addon dashboard=true in "no-preload-154186"
	W1222 01:36:30.606487 1681323 addons.go:248] addon dashboard should already be in state true
	I1222 01:36:30.606508 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.606709 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.606923 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.609388 1681323 addons.go:70] Setting default-storageclass=true in profile "no-preload-154186"
	I1222 01:36:30.609534 1681323 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-154186"
	I1222 01:36:30.610760 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.611641 1681323 out.go:179] * Verifying Kubernetes components...
	I1222 01:36:30.614570 1681323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:36:30.635770 1681323 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:36:30.638688 1681323 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:30.638712 1681323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:36:30.638781 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.665572 1681323 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:36:30.669104 1681323 addons.go:239] Setting addon default-storageclass=true in "no-preload-154186"
	I1222 01:36:30.669154 1681323 host.go:66] Checking if "no-preload-154186" exists ...
	I1222 01:36:30.669590 1681323 cli_runner.go:164] Run: docker container inspect no-preload-154186 --format={{.State.Status}}
	I1222 01:36:30.683959 1681323 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:36:30.687520 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:36:30.687549 1681323 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:36:30.687626 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.694403 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.702255 1681323 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:30.702278 1681323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:36:30.702352 1681323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-154186
	I1222 01:36:30.734213 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.746998 1681323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38702 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/no-preload-154186/id_rsa Username:docker}
	I1222 01:36:30.831368 1681323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:36:30.859591 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:36:30.874776 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:36:30.874854 1681323 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:36:30.886571 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:30.896408 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:36:30.896491 1681323 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:36:30.935406 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:36:30.935480 1681323 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:36:30.980952 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:36:30.980974 1681323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:36:30.995662 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:36:30.995686 1681323 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:36:31.011181 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:36:31.011207 1681323 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:36:31.025817 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:36:31.025897 1681323 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:36:31.040425 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:36:31.040451 1681323 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:36:31.053847 1681323 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:31.053877 1681323 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:36:31.068203 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:31.247341 1681323 node_ready.go:35] waiting up to 6m0s for node "no-preload-154186" to be "Ready" ...
	W1222 01:36:31.247593 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.247631 1681323 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.247506 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.247875 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.447386 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:31.513087 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.526286 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:36:31.572817 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:31.587392 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:31.642567 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:31.848456 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:31.905960 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.073298 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:32.132496 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.203849 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:32.270984 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.342424 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:32.407926 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.631301 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:32.690048 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:32.962216 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:33.025131 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:33.248294 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:33.408735 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:33.475146 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:33.554384 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:33.564336 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:33.639233 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:33.648250 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:34.651164 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:34.715118 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:34.728333 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:36:34.766376 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:34.808648 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:34.845664 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:35.248568 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:36.090694 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:36.156793 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:36.271773 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:36.333746 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:36.520979 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:36.615878 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:37.748720 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:38.179203 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:38.240963 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:38.510984 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:38.571044 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:39.749000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:40.373298 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:40.435360 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:40.473700 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:40.535044 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:41.925479 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:41.983644 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:42.248915 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:36:44.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:45.973776 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:46.041089 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:46.041130 1681323 retry.go:84] will retry after 8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:46.497259 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:46.561444 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:46.749030 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:47.516501 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:47.578863 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:50.255531 1670843 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000628133s
	I1222 01:36:50.255560 1670843 kubeadm.go:319] 
	I1222 01:36:50.255614 1670843 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 01:36:50.255656 1670843 kubeadm.go:319] 	- The kubelet is not running
	I1222 01:36:50.255761 1670843 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 01:36:50.255770 1670843 kubeadm.go:319] 
	I1222 01:36:50.255874 1670843 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 01:36:50.255911 1670843 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 01:36:50.255945 1670843 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 01:36:50.255956 1670843 kubeadm.go:319] 
	I1222 01:36:50.263125 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:36:50.263638 1670843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 01:36:50.263757 1670843 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:36:50.264047 1670843 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 01:36:50.264076 1670843 kubeadm.go:319] 
	I1222 01:36:50.264231 1670843 kubeadm.go:403] duration metric: took 8m5.963674476s to StartCluster
	I1222 01:36:50.264284 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:36:50.264365 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:36:50.264448 1670843 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 01:36:50.290175 1670843 cri.go:96] found id: ""
	I1222 01:36:50.290209 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.290218 1670843 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:36:50.290226 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:36:50.290294 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:36:50.322949 1670843 cri.go:96] found id: ""
	I1222 01:36:50.322982 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.322992 1670843 logs.go:284] No container was found matching "etcd"
	I1222 01:36:50.322998 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:36:50.323057 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:36:50.348796 1670843 cri.go:96] found id: ""
	I1222 01:36:50.348823 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.348833 1670843 logs.go:284] No container was found matching "coredns"
	I1222 01:36:50.348839 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:36:50.348963 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:36:50.377272 1670843 cri.go:96] found id: ""
	I1222 01:36:50.377300 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.377309 1670843 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:36:50.377316 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:36:50.377378 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:36:50.402189 1670843 cri.go:96] found id: ""
	I1222 01:36:50.402213 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.402222 1670843 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:36:50.402228 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:36:50.402290 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:36:50.426628 1670843 cri.go:96] found id: ""
	I1222 01:36:50.426656 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.426666 1670843 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:36:50.426674 1670843 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:36:50.426736 1670843 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:36:50.455040 1670843 cri.go:96] found id: ""
	I1222 01:36:50.455066 1670843 logs.go:282] 0 containers: []
	W1222 01:36:50.455076 1670843 logs.go:284] No container was found matching "kindnet"
	I1222 01:36:50.455086 1670843 logs.go:123] Gathering logs for kubelet ...
	I1222 01:36:50.455098 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:36:50.511757 1670843 logs.go:123] Gathering logs for dmesg ...
	I1222 01:36:50.511795 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:36:50.527148 1670843 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:36:50.527182 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:36:50.590231 1670843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:36:50.581376    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.582223    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.583755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.584210    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.585755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:36:50.581376    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.582223    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.583755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.584210    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:36:50.585755    4829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:36:50.590258 1670843 logs.go:123] Gathering logs for containerd ...
	I1222 01:36:50.590270 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:36:50.629028 1670843 logs.go:123] Gathering logs for container status ...
	I1222 01:36:50.629065 1670843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:36:50.657217 1670843 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 01:36:50.657266 1670843 out.go:285] * 
	W1222 01:36:50.657314 1670843 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:36:50.657331 1670843 out.go:285] * 
	W1222 01:36:50.659448 1670843 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:36:50.664218 1670843 out.go:203] 
	W1222 01:36:50.668072 1670843 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000628133s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 01:36:50.668130 1670843 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 01:36:50.668162 1670843 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 01:36:50.671361 1670843 out.go:203] 
	W1222 01:36:49.248854 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:36:51.748544 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:51.887416 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:36:51.983037 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:51.983081 1681323 retry.go:84] will retry after 11.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:53.748772 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:54.034268 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:36:54.095252 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:36:56.248219 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:36:58.248513 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:36:59.627564 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:36:59.686822 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:36:59.686859 1681323 retry.go:84] will retry after 13.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:00.248760 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:02.748124 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:03.607385 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:37:03.671489 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:03.671533 1681323 retry.go:84] will retry after 9.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:04.749026 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:07.248861 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:09.248980 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:09.255139 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:37:09.316591 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:09.316628 1681323 retry.go:84] will retry after 18s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:11.749013 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:12.971520 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:37:13.036863 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:13.036909 1681323 retry.go:84] will retry after 32.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:13.196332 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:37:13.286762 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:14.248137 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:16.248491 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:18.748811 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:21.248255 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:23.748025 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:26.248038 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:27.347403 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:37:27.407561 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:27.407597 1681323 retry.go:84] will retry after 34.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:28.248450 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:30.748265 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:33.248411 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:35.748090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:37.748168 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:39.748842 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:42.249134 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:43.405621 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:37:43.463716 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:37:43.463758 1681323 retry.go:84] will retry after 31.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:44.748410 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:37:45.174172 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:37:45.310864 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:37:47.248085 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:49.248654 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:51.249044 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:53.748668 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:56.248083 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:37:58.248588 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:00.748407 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:02.126827 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:02.186335 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:02.186432 1681323 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:38:02.748918 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:05.248068 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:07.248197 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:09.249028 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:11.748079 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:13.748193 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:14.875348 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:14.937026 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:14.937135 1681323 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:38:14.956170 1681323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:15.025663 1681323 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:15.025778 1681323 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:38:15.029889 1681323 out.go:179] * Enabled addons: 
	I1222 01:38:15.035089 1681323 addons.go:530] duration metric: took 1m44.428931378s for enable addons: enabled=[]
	W1222 01:38:16.248009 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:18.248678 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:20.748366 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:23.248345 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:25.248576 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:27.249028 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072209044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072223001Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072264175Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072281972Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072292934Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072316770Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072339031Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072355876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072374076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072409957Z" level=info msg="Connect containerd service"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.072783704Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.073465565Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089227568Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089296402Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089333145Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.089383992Z" level=info msg="Start recovering state"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.142982119Z" level=info msg="Start event monitor"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143200181Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143267168Z" level=info msg="Start streaming server"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143353618Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143430780Z" level=info msg="runtime interface starting up..."
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143506456Z" level=info msg="starting plugins..."
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.143586498Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:28:42 newest-cni-869293 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 22 01:28:42 newest-cni-869293 containerd[758]: time="2025-12-22T01:28:42.147653548Z" level=info msg="containerd successfully booted in 0.098372s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:38:29.566135    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:38:29.566767    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:38:29.568481    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:38:29.568849    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:38:29.570438    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:38:29 up 1 day,  8:21,  0 user,  load average: 0.64, 0.99, 1.67
	Linux newest-cni-869293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:38:26 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:38:26 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 448.
	Dec 22 01:38:26 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:26 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:26 newest-cni-869293 kubelet[5813]: E1222 01:38:26.799480    5813 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:38:26 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:38:26 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:38:27 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 449.
	Dec 22 01:38:27 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:27 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:27 newest-cni-869293 kubelet[5819]: E1222 01:38:27.555613    5819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:38:27 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:38:27 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 450.
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:28 newest-cni-869293 kubelet[5824]: E1222 01:38:28.308131    5824 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 451.
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:28 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:38:29 newest-cni-869293 kubelet[5851]: E1222 01:38:29.095028    5851 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:38:29 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:38:29 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293: exit status 6 (425.893687ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 01:38:30.131731 1685440 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-869293" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (97.77s)
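The kubelet restart loop captured above fails validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and the test times out. A quick way to confirm which cgroup hierarchy a host (or the minikube node, via `minikube ssh`) is running is a sketch like the following; the command is standard coreutils, and the interpretation of its output is the usual cgroup convention, not anything specific to this test run:

```shell
# Inspect the filesystem type mounted at /sys/fs/cgroup:
#   cgroup2fs -> unified cgroup v2 hierarchy (what kubelet v1.35+ expects)
#   tmpfs     -> legacy cgroup v1 hierarchy (what this kubelet refuses to start on)
stat -fc %T /sys/fs/cgroup/
```

On an Ubuntu 20.04 host like the Jenkins worker above (kernel 5.15), this typically reports `tmpfs` unless the kernel was booted with `systemd.unified_cgroup_hierarchy=1`, which would be consistent with the repeated kubelet validation failures in the log.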

TestStartStop/group/newest-cni/serial/SecondStart (372.05s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1222 01:39:13.743006 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:39:26.767480 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:40:56.757269 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:41:10.876590 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:41:29.153979 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 105 (6m7.217140784s)

-- stdout --
	* [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	* Pulling base image v0.0.48-1766219634-22260 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1222 01:38:31.686572 1685746 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:31.686782 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.686816 1685746 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:31.686836 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.687133 1685746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:38:31.687563 1685746 out.go:368] Setting JSON to false
	I1222 01:38:31.688584 1685746 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116465,"bootTime":1766251047,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:38:31.688686 1685746 start.go:143] virtualization:  
	I1222 01:38:31.691576 1685746 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:31.695464 1685746 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:31.695552 1685746 notify.go:221] Checking for updates...
	I1222 01:38:31.701535 1685746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:31.704637 1685746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:31.707560 1685746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:38:31.710534 1685746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:31.713575 1685746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:31.717166 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:31.717762 1685746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:31.753414 1685746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:31.753539 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.812499 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.803096079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.812613 1685746 docker.go:319] overlay module found
	I1222 01:38:31.815770 1685746 out.go:179] * Using the docker driver based on existing profile
	I1222 01:38:31.818545 1685746 start.go:309] selected driver: docker
	I1222 01:38:31.818566 1685746 start.go:928] validating driver "docker" against &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.818662 1685746 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:31.819384 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.880587 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.870819289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.880955 1685746 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:31.880984 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:31.881038 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:31.881081 1685746 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.884279 1685746 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:38:31.887056 1685746 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:38:31.890043 1685746 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:31.892868 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:31.892919 1685746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:38:31.892932 1685746 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:31.892952 1685746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:31.893022 1685746 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:31.893039 1685746 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:38:31.893153 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:31.913018 1685746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:31.913041 1685746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:31.913060 1685746 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:31.913090 1685746 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:31.913180 1685746 start.go:364] duration metric: took 44.275µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:38:31.913204 1685746 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:38:31.913210 1685746 fix.go:54] fixHost starting: 
	I1222 01:38:31.913477 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:31.930780 1685746 fix.go:112] recreateIfNeeded on newest-cni-869293: state=Stopped err=<nil>
	W1222 01:38:31.930815 1685746 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 01:38:31.934050 1685746 out.go:252] * Restarting existing docker container for "newest-cni-869293" ...
	I1222 01:38:31.934152 1685746 cli_runner.go:164] Run: docker start newest-cni-869293
	I1222 01:38:32.204881 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:32.243691 1685746 kic.go:430] container "newest-cni-869293" state is running.
	I1222 01:38:32.244096 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:32.265947 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:32.266210 1685746 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:32.266268 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:32.293919 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:32.294281 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:32.294292 1685746 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:32.294932 1685746 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54476->127.0.0.1:38707: read: connection reset by peer
	I1222 01:38:35.433786 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.433813 1685746 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:38:35.433886 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.451516 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.451830 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.451848 1685746 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:38:35.591409 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.591519 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.609341 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.609647 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.609670 1685746 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:35.742798 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:35.742824 1685746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:38:35.742864 1685746 ubuntu.go:190] setting up certificates
	I1222 01:38:35.742881 1685746 provision.go:84] configureAuth start
	I1222 01:38:35.742942 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:35.763152 1685746 provision.go:143] copyHostCerts
	I1222 01:38:35.763214 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:38:35.763230 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:38:35.763306 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:38:35.763401 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:38:35.763407 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:38:35.763431 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:38:35.763483 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:38:35.763490 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:38:35.763514 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:38:35.763557 1685746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:38:35.889485 1685746 provision.go:177] copyRemoteCerts
	I1222 01:38:35.889557 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:35.889605 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.914143 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.016150 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:38:36.035930 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:36.054716 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:36.072586 1685746 provision.go:87] duration metric: took 329.680992ms to configureAuth
	I1222 01:38:36.072618 1685746 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:36.072830 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:36.072842 1685746 machine.go:97] duration metric: took 3.806623107s to provisionDockerMachine
	I1222 01:38:36.072850 1685746 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:38:36.072866 1685746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:36.072926 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:36.072980 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.090324 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.187013 1685746 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:36.191029 1685746 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:36.191111 1685746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:36.191134 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:38:36.191215 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:38:36.191355 1685746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:38:36.191477 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:36.200008 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:36.219292 1685746 start.go:296] duration metric: took 146.420744ms for postStartSetup
	I1222 01:38:36.219381 1685746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:36.219430 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.237412 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.336664 1685746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:36.342619 1685746 fix.go:56] duration metric: took 4.429400761s for fixHost
	I1222 01:38:36.342646 1685746 start.go:83] releasing machines lock for "newest-cni-869293", held for 4.429452897s
	I1222 01:38:36.342750 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:36.362211 1685746 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:36.362264 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.362344 1685746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:36.362407 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.385216 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.393122 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.571819 1685746 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:36.578591 1685746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:36.583121 1685746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:36.583193 1685746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:36.591539 1685746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:38:36.591564 1685746 start.go:496] detecting cgroup driver to use...
	I1222 01:38:36.591620 1685746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:36.591689 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:38:36.609980 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:38:36.623763 1685746 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:36.623883 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:36.639236 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:36.652937 1685746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:36.763224 1685746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:36.883204 1685746 docker.go:234] disabling docker service ...
	I1222 01:38:36.883275 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:36.898372 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:36.911453 1685746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:37.034252 1685746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:37.157335 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:37.170564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:37.185195 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:38:37.194710 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:38:37.204647 1685746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:38:37.204731 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:38:37.214808 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.223830 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:38:37.232600 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.242680 1685746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:37.254369 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:38:37.265094 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:38:37.278711 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:38:37.288297 1685746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:37.299386 1685746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:37.306803 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.412668 1685746 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:38:37.531042 1685746 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:38:37.531187 1685746 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:38:37.535291 1685746 start.go:564] Will wait 60s for crictl version
	I1222 01:38:37.535398 1685746 ssh_runner.go:195] Run: which crictl
	I1222 01:38:37.539239 1685746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:37.568186 1685746 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:38:37.568329 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.589324 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.614497 1685746 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:38:37.617592 1685746 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:37.633737 1685746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:37.637631 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.650774 1685746 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 01:38:37.653725 1685746 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:37.653882 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:37.653965 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.679481 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.679507 1685746 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:38:37.679567 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.707944 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.707969 1685746 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:37.707979 1685746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:38:37.708083 1685746 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:37.708165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:38:37.740577 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:37.740600 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:37.740621 1685746 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:37.740645 1685746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:37.740759 1685746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:38:37.740831 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:37.749395 1685746 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:37.749470 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:37.757587 1685746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:38:37.770794 1685746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:37.784049 1685746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:38:37.797792 1685746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:37.801552 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.811598 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.940636 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:37.962625 1685746 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:38:37.962649 1685746 certs.go:195] generating shared ca certs ...
	I1222 01:38:37.962682 1685746 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:37.962837 1685746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:38:37.962900 1685746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:38:37.962912 1685746 certs.go:257] generating profile certs ...
	I1222 01:38:37.963014 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:38:37.963084 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:38:37.963128 1685746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:38:37.963238 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:38:37.963276 1685746 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:37.963287 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:38:37.963316 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:37.963343 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:37.963379 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:37.963434 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:37.964596 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:37.999913 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:38.025465 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:38.053443 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:38.087732 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:38.107200 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:38:38.125482 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:38.143284 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:38.161557 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:38:38.180124 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:38.198446 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:38:38.215766 1685746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:38.228774 1685746 ssh_runner.go:195] Run: openssl version
	I1222 01:38:38.235631 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.244039 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:38:38.252123 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256169 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256240 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.297738 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:38.305673 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.313250 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:38.321143 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325161 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325259 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.366760 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:38.375589 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.383142 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:38:38.391262 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395405 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395474 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.436708 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:38.444445 1685746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:38.448390 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:38:38.489618 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:38:38.530725 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:38:38.571636 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:38:38.612592 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:38:38.653872 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:38:38.695135 1685746 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:38.695236 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:38.695304 1685746 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:38.730406 1685746 cri.go:96] found id: ""
	I1222 01:38:38.730480 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:38.742929 1685746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:38:38.742952 1685746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:38:38.743012 1685746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:38:38.765617 1685746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:38:38.766245 1685746 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.766510 1685746 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-869293" cluster setting kubeconfig missing "newest-cni-869293" context setting]
	I1222 01:38:38.766957 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.768687 1685746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:38:38.776658 1685746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:38:38.776695 1685746 kubeadm.go:602] duration metric: took 33.737033ms to restartPrimaryControlPlane
	I1222 01:38:38.776705 1685746 kubeadm.go:403] duration metric: took 81.581475ms to StartCluster
	I1222 01:38:38.776720 1685746 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.776793 1685746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.777670 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.777888 1685746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:38:38.778285 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:38.778259 1685746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:38:38.778393 1685746 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-869293"
	I1222 01:38:38.778408 1685746 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-869293"
	I1222 01:38:38.778433 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.778917 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.779098 1685746 addons.go:70] Setting dashboard=true in profile "newest-cni-869293"
	I1222 01:38:38.779126 1685746 addons.go:239] Setting addon dashboard=true in "newest-cni-869293"
	W1222 01:38:38.779211 1685746 addons.go:248] addon dashboard should already be in state true
	I1222 01:38:38.779264 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.779355 1685746 addons.go:70] Setting default-storageclass=true in profile "newest-cni-869293"
	I1222 01:38:38.779382 1685746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-869293"
	I1222 01:38:38.779657 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.780717 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.783183 1685746 out.go:179] * Verifying Kubernetes components...
	I1222 01:38:38.795835 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:38.839727 1685746 addons.go:239] Setting addon default-storageclass=true in "newest-cni-869293"
	I1222 01:38:38.839773 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.844706 1685746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:38:38.844788 1685746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:38:38.845056 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.847706 1685746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:38.847732 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:38:38.847798 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.850623 1685746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:38:38.856243 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:38:38.856273 1685746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:38:38.856351 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.873943 1685746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:38.873976 1685746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:38:38.874046 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.897069 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.917887 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.925239 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:39.040289 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:39.062591 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:39.071403 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:38:39.071429 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:38:39.085714 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:38:39.085742 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:38:39.113564 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:39.117642 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:38:39.117668 1685746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:38:39.160317 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:38:39.160342 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:38:39.179666 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:38:39.179693 1685746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:38:39.195940 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:38:39.195967 1685746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:38:39.211128 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:38:39.211152 1685746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:38:39.229341 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:38:39.229367 1685746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:38:39.242863 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.242891 1685746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:38:39.257396 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.740898 1685746 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:38:39.740996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:39.741091 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741148 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.741150 1685746 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741362 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.924082 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.987453 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.012530 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.076254 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.106299 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.156991 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.241110 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:40.291973 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.350617 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:40.361182 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.389531 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.437774 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.465333 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.692837 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.741460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:40.766384 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.961925 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.997418 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:41.047996 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:41.103696 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:41.241962 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:41.674831 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:41.741299 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:41.744404 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.118142 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:42.189177 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.241414 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:42.263947 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:42.333305 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.741698 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.241589 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.265699 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:43.338843 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.509282 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:43.559893 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:43.581660 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.623026 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.741112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.241130 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.741229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.931703 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:45.008485 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.244431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.741178 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.765524 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:45.843868 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.977122 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:46.040374 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:46.241453 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:46.486248 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:46.559168 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:46.741869 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.241095 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.741431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.241112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.294921 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:48.361284 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:48.741773 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.852570 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:48.911873 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.241377 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:49.368148 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:49.429800 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.741220 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.241219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.741547 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:51.241159 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:51.741774 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.241901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.391494 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:52.452597 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.452636 1685746 retry.go:84] will retry after 11.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.508552 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:52.579056 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.741603 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.241037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.297681 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:53.358617 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:53.741128 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.241259 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.741444 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.241131 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.741185 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.241903 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.742022 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.871217 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:56.931377 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:56.931421 1685746 retry.go:84] will retry after 12.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:57.241904 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:57.741132 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.241082 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.741129 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.241514 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.741571 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.241104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.342627 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:00.433191 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.433235 1685746 retry.go:84] will retry after 8.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.741833 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:01.241455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:01.741502 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.241599 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.741070 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.241152 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.041996 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:04.111760 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.111812 1685746 retry.go:84] will retry after 10s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.242089 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.741350 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.241736 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.741098 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:06.241279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:06.742311 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.241927 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.741133 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.241157 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.532510 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:08.603273 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.603314 1685746 retry.go:84] will retry after 7.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.741625 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.241616 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.741180 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.845450 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:09.907468 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:10.242040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:10.742004 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:11.242043 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:11.741028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.241114 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.741779 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.241398 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.741757 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.084932 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:14.149870 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.149915 1685746 retry.go:84] will retry after 13.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.241288 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.742009 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.241500 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.241659 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.395227 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:16.456949 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:16.741507 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.241459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.741042 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.241111 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.741162 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.241875 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.741715 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.241732 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:21.241347 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:21.741639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.241911 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.742051 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.241970 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.741127 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.241560 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.741692 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.241106 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.741122 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:26.241137 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:26.741585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.241155 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.301256 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:27.375517 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.375598 1685746 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.241034 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.741642 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:29.226555 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:39:29.242011 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:29.291422 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:29.741622 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.245888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:31.241550 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:31.741066 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.241183 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.741695 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.241134 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.741807 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.241685 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.741125 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.241915 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.741241 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:36.241639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:36.741652 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.241141 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.741891 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.054310 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:38.118505 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.118547 1685746 retry.go:84] will retry after 47.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.241764 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:39.241609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:39.241696 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:39.269891 1685746 cri.go:96] found id: ""
	I1222 01:39:39.269914 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.269923 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:39.269930 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:39.269991 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:39.300389 1685746 cri.go:96] found id: ""
	I1222 01:39:39.300414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.300423 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:39.300430 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:39.300501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:39.326557 1685746 cri.go:96] found id: ""
	I1222 01:39:39.326582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.326592 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:39.326598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:39.326697 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:39.354049 1685746 cri.go:96] found id: ""
	I1222 01:39:39.354115 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.354125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:39.354132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:39.354202 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:39.380457 1685746 cri.go:96] found id: ""
	I1222 01:39:39.380490 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.380500 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:39.380507 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:39.380577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:39.407039 1685746 cri.go:96] found id: ""
	I1222 01:39:39.407062 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.407070 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:39.407076 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:39.407139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:39.431541 1685746 cri.go:96] found id: ""
	I1222 01:39:39.431568 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.431577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:39.431584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:39.431676 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:39.457555 1685746 cri.go:96] found id: ""
	I1222 01:39:39.457588 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.457607 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:39.457616 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:39.457629 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:39.517907 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:39.517997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:39.534348 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:39.534373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:39.607407 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:39.607438 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:39.607463 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:39.634050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:39.634094 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:42.163786 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:42.176868 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:42.176959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:42.208642 1685746 cri.go:96] found id: ""
	I1222 01:39:42.208672 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.208682 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:42.208688 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:42.208757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:42.249523 1685746 cri.go:96] found id: ""
	I1222 01:39:42.249552 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.249562 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:42.249569 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:42.249641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:42.283515 1685746 cri.go:96] found id: ""
	I1222 01:39:42.283542 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.283550 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:42.283557 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:42.283659 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:42.312237 1685746 cri.go:96] found id: ""
	I1222 01:39:42.312260 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.312269 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:42.312276 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:42.312335 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:42.341269 1685746 cri.go:96] found id: ""
	I1222 01:39:42.341297 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.341306 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:42.341312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:42.341374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:42.367696 1685746 cri.go:96] found id: ""
	I1222 01:39:42.367723 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.367732 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:42.367739 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:42.367804 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:42.396577 1685746 cri.go:96] found id: ""
	I1222 01:39:42.396602 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.396612 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:42.396618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:42.396689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:42.426348 1685746 cri.go:96] found id: ""
	I1222 01:39:42.426380 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.426392 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:42.426413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:42.426433 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:42.481969 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:42.482005 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:42.499357 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:42.499436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:42.576627 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:42.576649 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:42.576663 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:42.601751 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:42.601784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.131239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:45.157288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:45.157379 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:45.207917 1685746 cri.go:96] found id: ""
	I1222 01:39:45.207953 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.207963 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:45.207975 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:45.208042 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:45.255413 1685746 cri.go:96] found id: ""
	I1222 01:39:45.255448 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.255459 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:45.255467 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:45.255564 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:45.300163 1685746 cri.go:96] found id: ""
	I1222 01:39:45.300196 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.300206 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:45.300214 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:45.300285 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:45.348918 1685746 cri.go:96] found id: ""
	I1222 01:39:45.348943 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.348952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:45.348959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:45.349022 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:45.379477 1685746 cri.go:96] found id: ""
	I1222 01:39:45.379502 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.379512 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:45.379518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:45.379580 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:45.410514 1685746 cri.go:96] found id: ""
	I1222 01:39:45.410535 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.410543 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:45.410550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:45.410611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:45.436661 1685746 cri.go:96] found id: ""
	I1222 01:39:45.436686 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.436695 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:45.436702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:45.436769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:45.466972 1685746 cri.go:96] found id: ""
	I1222 01:39:45.467001 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.467010 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:45.467019 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:45.467032 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:45.567688 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:45.567712 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:45.567731 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:45.593712 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:45.593757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.626150 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:45.626179 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:45.681273 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:45.681310 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:48.196684 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:48.207640 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:48.207718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:48.232650 1685746 cri.go:96] found id: ""
	I1222 01:39:48.232680 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.232688 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:48.232708 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:48.232772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:48.264801 1685746 cri.go:96] found id: ""
	I1222 01:39:48.264831 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.264841 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:48.264848 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:48.264915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:48.300270 1685746 cri.go:96] found id: ""
	I1222 01:39:48.300300 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.300310 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:48.300317 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:48.300388 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:48.334711 1685746 cri.go:96] found id: ""
	I1222 01:39:48.334782 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.334806 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:48.334821 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:48.334898 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:48.359955 1685746 cri.go:96] found id: ""
	I1222 01:39:48.360023 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.360038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:48.360052 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:48.360124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:48.386551 1685746 cri.go:96] found id: ""
	I1222 01:39:48.386574 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.386583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:48.386589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:48.386648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:48.412026 1685746 cri.go:96] found id: ""
	I1222 01:39:48.412052 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.412062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:48.412069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:48.412129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:48.440847 1685746 cri.go:96] found id: ""
	I1222 01:39:48.440870 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.440878 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:48.440887 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:48.440897 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:48.496591 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:48.496673 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:48.512755 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:48.512834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:48.596174 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:48.596249 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:48.596281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:48.621362 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:48.621397 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:51.155431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:51.169542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:51.169616 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:51.195476 1685746 cri.go:96] found id: ""
	I1222 01:39:51.195500 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.195509 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:51.195516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:51.195585 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:51.220215 1685746 cri.go:96] found id: ""
	I1222 01:39:51.220240 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.220249 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:51.220255 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:51.220324 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:51.248478 1685746 cri.go:96] found id: ""
	I1222 01:39:51.248508 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.248527 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:51.248534 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:51.248594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:51.282587 1685746 cri.go:96] found id: ""
	I1222 01:39:51.282615 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.282624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:51.282630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:51.282691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:51.310999 1685746 cri.go:96] found id: ""
	I1222 01:39:51.311029 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.311038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:51.311044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:51.311105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:51.338337 1685746 cri.go:96] found id: ""
	I1222 01:39:51.338414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.338431 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:51.338438 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:51.338517 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:51.365554 1685746 cri.go:96] found id: ""
	I1222 01:39:51.365582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.365591 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:51.365598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:51.365656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:51.389874 1685746 cri.go:96] found id: ""
	I1222 01:39:51.389903 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.389913 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:51.389922 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:51.389933 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:51.449732 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:51.449797 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:51.467573 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:51.467669 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:51.568437 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:51.568512 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:51.568561 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:51.595758 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:51.595841 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:53.905270 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:53.968241 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:53.968406 1685746 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:39:54.129563 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:54.143910 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:54.144012 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:54.169973 1685746 cri.go:96] found id: ""
	I1222 01:39:54.170009 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.170018 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:54.170042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:54.170158 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:54.198811 1685746 cri.go:96] found id: ""
	I1222 01:39:54.198838 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.198847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:54.198854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:54.198917 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:54.224425 1685746 cri.go:96] found id: ""
	I1222 01:39:54.224452 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.224462 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:54.224468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:54.224549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:54.273957 1685746 cri.go:96] found id: ""
	I1222 01:39:54.273983 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.273992 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:54.273998 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:54.274059 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:54.306801 1685746 cri.go:96] found id: ""
	I1222 01:39:54.306826 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.306836 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:54.306842 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:54.306916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:54.339513 1685746 cri.go:96] found id: ""
	I1222 01:39:54.339539 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.339548 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:54.339555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:54.339617 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:54.365259 1685746 cri.go:96] found id: ""
	I1222 01:39:54.365285 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.365295 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:54.365301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:54.365363 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:54.390271 1685746 cri.go:96] found id: ""
	I1222 01:39:54.390294 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.390303 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:54.390312 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:54.390324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:54.445696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:54.445728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:54.460676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:54.460751 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:54.537038 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:54.537060 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:54.537075 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:54.566201 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:54.566234 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:57.093953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:57.104681 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:57.104755 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:57.132428 1685746 cri.go:96] found id: ""
	I1222 01:39:57.132455 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.132465 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:57.132472 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:57.132532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:57.158487 1685746 cri.go:96] found id: ""
	I1222 01:39:57.158512 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.158521 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:57.158528 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:57.158589 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:57.184175 1685746 cri.go:96] found id: ""
	I1222 01:39:57.184203 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.184213 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:57.184219 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:57.184279 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:57.215724 1685746 cri.go:96] found id: ""
	I1222 01:39:57.215752 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.215761 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:57.215768 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:57.215830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:57.252375 1685746 cri.go:96] found id: ""
	I1222 01:39:57.252408 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.252420 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:57.252427 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:57.252499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:57.291286 1685746 cri.go:96] found id: ""
	I1222 01:39:57.291323 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.291333 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:57.291344 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:57.291408 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:57.322496 1685746 cri.go:96] found id: ""
	I1222 01:39:57.322577 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.322594 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:57.322602 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:57.322678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:57.352695 1685746 cri.go:96] found id: ""
	I1222 01:39:57.352722 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.352731 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:57.352741 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:57.352754 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:57.410232 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:57.410271 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:57.425451 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:57.425481 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:57.498123 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:57.498197 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:57.498226 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:57.530586 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:57.530677 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:00.062361 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:00.152699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:00.152784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:00.243584 1685746 cri.go:96] found id: ""
	I1222 01:40:00.243618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.243635 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:00.243645 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:00.243728 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:00.323644 1685746 cri.go:96] found id: ""
	I1222 01:40:00.323704 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.323720 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:00.323730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:00.323805 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:00.411473 1685746 cri.go:96] found id: ""
	I1222 01:40:00.411502 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.411521 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:00.411532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:00.411621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:00.511894 1685746 cri.go:96] found id: ""
	I1222 01:40:00.511922 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.511933 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:00.511941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:00.512015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:00.575706 1685746 cri.go:96] found id: ""
	I1222 01:40:00.575736 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.575746 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:00.575753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:00.575828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:00.666886 1685746 cri.go:96] found id: ""
	I1222 01:40:00.666913 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.666922 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:00.666929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:00.667011 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:00.704456 1685746 cri.go:96] found id: ""
	I1222 01:40:00.704490 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.704499 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:00.704513 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:00.704583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:00.763369 1685746 cri.go:96] found id: ""
	I1222 01:40:00.763404 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.763415 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:00.763425 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:00.763439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:00.822507 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:00.822546 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:00.839492 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:00.839529 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:00.911350 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:00.911374 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:00.911389 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:00.937901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:00.937953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:01.674108 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:40:01.748211 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:01.748257 1685746 retry.go:84] will retry after 28.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:03.469297 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:03.480071 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:03.480145 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:03.519512 1685746 cri.go:96] found id: ""
	I1222 01:40:03.519627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.519661 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:03.519709 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:03.520078 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:03.555737 1685746 cri.go:96] found id: ""
	I1222 01:40:03.555763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.555806 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:03.555819 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:03.555909 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:03.580955 1685746 cri.go:96] found id: ""
	I1222 01:40:03.580986 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.580995 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:03.581004 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:03.581068 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:03.610855 1685746 cri.go:96] found id: ""
	I1222 01:40:03.610935 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.610952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:03.610961 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:03.611037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:03.635994 1685746 cri.go:96] found id: ""
	I1222 01:40:03.636019 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.636027 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:03.636033 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:03.636103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:03.661008 1685746 cri.go:96] found id: ""
	I1222 01:40:03.661086 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.661109 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:03.661132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:03.661249 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:03.685551 1685746 cri.go:96] found id: ""
	I1222 01:40:03.685577 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.685586 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:03.685594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:03.685653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:03.710025 1685746 cri.go:96] found id: ""
	I1222 01:40:03.710054 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.710063 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:03.710073 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:03.710109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:03.748992 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:03.749066 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:03.812952 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:03.812990 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:03.828176 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:03.828207 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:03.895557 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:03.895583 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:03.895596 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:06.421124 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:06.432321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:06.432435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:06.458845 1685746 cri.go:96] found id: ""
	I1222 01:40:06.458926 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.458944 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:06.458951 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:06.459024 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:06.483853 1685746 cri.go:96] found id: ""
	I1222 01:40:06.483881 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.483890 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:06.483897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:06.483956 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:06.518710 1685746 cri.go:96] found id: ""
	I1222 01:40:06.518741 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.518750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:06.518757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:06.518821 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:06.549152 1685746 cri.go:96] found id: ""
	I1222 01:40:06.549183 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.549191 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:06.549198 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:06.549256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:06.579003 1685746 cri.go:96] found id: ""
	I1222 01:40:06.579032 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.579041 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:06.579048 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:06.579110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:06.614999 1685746 cri.go:96] found id: ""
	I1222 01:40:06.615029 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.615038 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:06.615045 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:06.615109 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:06.644049 1685746 cri.go:96] found id: ""
	I1222 01:40:06.644073 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.644082 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:06.644088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:06.644150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:06.670551 1685746 cri.go:96] found id: ""
	I1222 01:40:06.670580 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.670590 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:06.670599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:06.670630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:06.696127 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:06.696164 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:06.728583 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:06.728612 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:06.788068 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:06.788103 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:06.805676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:06.805708 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:06.875097 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.375863 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:09.386805 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:09.386883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:09.413272 1685746 cri.go:96] found id: ""
	I1222 01:40:09.413299 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.413307 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:09.413313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:09.413374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:09.438591 1685746 cri.go:96] found id: ""
	I1222 01:40:09.438615 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.438623 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:09.438630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:09.438692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:09.463919 1685746 cri.go:96] found id: ""
	I1222 01:40:09.463943 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.463952 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:09.463959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:09.464026 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:09.493604 1685746 cri.go:96] found id: ""
	I1222 01:40:09.493627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.493641 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:09.493648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:09.493707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:09.529370 1685746 cri.go:96] found id: ""
	I1222 01:40:09.529394 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.529404 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:09.529411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:09.529477 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:09.562121 1685746 cri.go:96] found id: ""
	I1222 01:40:09.562150 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.562160 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:09.562167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:09.562233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:09.587896 1685746 cri.go:96] found id: ""
	I1222 01:40:09.587924 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.587935 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:09.587942 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:09.588010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:09.613576 1685746 cri.go:96] found id: ""
	I1222 01:40:09.613600 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.613609 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:09.613619 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:09.613630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:09.671590 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:09.671627 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:09.688438 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:09.688468 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:09.770484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.770797 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:09.770834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:09.803134 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:09.803237 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:12.334803 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:12.345660 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:12.345780 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:12.375026 1685746 cri.go:96] found id: ""
	I1222 01:40:12.375056 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.375067 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:12.375075 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:12.375154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:12.400255 1685746 cri.go:96] found id: ""
	I1222 01:40:12.400282 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.400291 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:12.400299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:12.400402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:12.425430 1685746 cri.go:96] found id: ""
	I1222 01:40:12.425458 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.425467 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:12.425474 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:12.425535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:12.450734 1685746 cri.go:96] found id: ""
	I1222 01:40:12.450816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.450832 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:12.450841 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:12.450918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:12.477690 1685746 cri.go:96] found id: ""
	I1222 01:40:12.477719 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.477735 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:12.477742 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:12.477803 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:12.517751 1685746 cri.go:96] found id: ""
	I1222 01:40:12.517779 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.517787 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:12.517794 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:12.517858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:12.544749 1685746 cri.go:96] found id: ""
	I1222 01:40:12.544777 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.544786 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:12.544793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:12.544858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:12.576758 1685746 cri.go:96] found id: ""
	I1222 01:40:12.576786 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.576795 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:12.576805 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:12.576816 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:12.592450 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:12.592478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:12.658073 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:12.658125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:12.658138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:12.683599 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:12.683637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:12.715675 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:12.715707 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:15.275108 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:15.285651 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:15.285724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:15.311249 1685746 cri.go:96] found id: ""
	I1222 01:40:15.311277 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.311287 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:15.311293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:15.311353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:15.336192 1685746 cri.go:96] found id: ""
	I1222 01:40:15.336218 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.336226 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:15.336234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:15.336297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:15.362231 1685746 cri.go:96] found id: ""
	I1222 01:40:15.362254 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.362263 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:15.362269 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:15.362331 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:15.390149 1685746 cri.go:96] found id: ""
	I1222 01:40:15.390176 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.390185 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:15.390192 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:15.390259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:15.417421 1685746 cri.go:96] found id: ""
	I1222 01:40:15.417446 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.417456 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:15.417464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:15.417530 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:15.444318 1685746 cri.go:96] found id: ""
	I1222 01:40:15.444346 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.444356 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:15.444368 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:15.444428 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:15.469475 1685746 cri.go:96] found id: ""
	I1222 01:40:15.469503 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.469512 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:15.469520 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:15.469581 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:15.501561 1685746 cri.go:96] found id: ""
	I1222 01:40:15.501588 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.501597 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:15.501606 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:15.501637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:15.518032 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:15.518062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:15.588024 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:15.588049 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:15.588062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:15.613914 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:15.613953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:15.645712 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:15.645739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:18.200926 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:18.211578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:18.211651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:18.237396 1685746 cri.go:96] found id: ""
	I1222 01:40:18.237421 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.237429 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:18.237436 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:18.237503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:18.264313 1685746 cri.go:96] found id: ""
	I1222 01:40:18.264345 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.264356 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:18.264369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:18.264451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:18.290240 1685746 cri.go:96] found id: ""
	I1222 01:40:18.290265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.290274 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:18.290281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:18.290340 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:18.315874 1685746 cri.go:96] found id: ""
	I1222 01:40:18.315898 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.315907 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:18.315914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:18.315975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:18.340813 1685746 cri.go:96] found id: ""
	I1222 01:40:18.340836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.340844 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:18.340852 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:18.340912 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:18.368094 1685746 cri.go:96] found id: ""
	I1222 01:40:18.368119 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.368128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:18.368135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:18.368251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:18.393525 1685746 cri.go:96] found id: ""
	I1222 01:40:18.393551 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.393559 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:18.393566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:18.393629 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:18.419984 1685746 cri.go:96] found id: ""
	I1222 01:40:18.420011 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.420020 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:18.420031 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:18.420043 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:18.435061 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:18.435090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:18.511216 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:18.511242 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:18.511258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:18.539215 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:18.539253 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:18.571721 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:18.571752 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.133335 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:21.144470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:21.144552 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:21.170402 1685746 cri.go:96] found id: ""
	I1222 01:40:21.170435 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.170444 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:21.170451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:21.170514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:21.197647 1685746 cri.go:96] found id: ""
	I1222 01:40:21.197674 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.197683 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:21.197690 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:21.197754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:21.231085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.231120 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.231130 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:21.231137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:21.231243 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:21.268085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.268112 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.268121 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:21.268129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:21.268195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:21.293752 1685746 cri.go:96] found id: ""
	I1222 01:40:21.293781 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.293791 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:21.293797 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:21.293864 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:21.320171 1685746 cri.go:96] found id: ""
	I1222 01:40:21.320195 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.320203 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:21.320210 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:21.320273 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:21.346069 1685746 cri.go:96] found id: ""
	I1222 01:40:21.346162 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.346177 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:21.346185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:21.346246 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:21.371416 1685746 cri.go:96] found id: ""
	I1222 01:40:21.371443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.371452 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:21.371462 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:21.371475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:21.404674 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:21.404703 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.460348 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:21.460388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:21.475958 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:21.475994 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:21.561495 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:21.561520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:21.561533 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:24.089244 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:24.100814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:24.100889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:24.126847 1685746 cri.go:96] found id: ""
	I1222 01:40:24.126878 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.126888 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:24.126895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:24.126959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:24.152740 1685746 cri.go:96] found id: ""
	I1222 01:40:24.152768 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.152778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:24.152784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:24.152845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:24.178506 1685746 cri.go:96] found id: ""
	I1222 01:40:24.178532 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.178540 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:24.178547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:24.178628 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:24.210111 1685746 cri.go:96] found id: ""
	I1222 01:40:24.210138 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.210147 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:24.210156 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:24.210219 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:24.234336 1685746 cri.go:96] found id: ""
	I1222 01:40:24.234358 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.234372 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:24.234379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:24.234440 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:24.259792 1685746 cri.go:96] found id: ""
	I1222 01:40:24.259861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.259884 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:24.259898 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:24.259973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:24.285594 1685746 cri.go:96] found id: ""
	I1222 01:40:24.285623 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.285632 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:24.285639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:24.285722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:24.312027 1685746 cri.go:96] found id: ""
	I1222 01:40:24.312055 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.312064 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:24.312074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:24.312088 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:24.345845 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:24.345873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:24.404101 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:24.404140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:24.419436 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:24.419465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:24.485147 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:24.485182 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:24.485195 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:25.275578 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:40:25.338578 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:25.338685 1685746 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:40:27.016338 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:27.030615 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:27.030685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:27.060751 1685746 cri.go:96] found id: ""
	I1222 01:40:27.060775 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.060784 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:27.060791 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:27.060850 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:27.088784 1685746 cri.go:96] found id: ""
	I1222 01:40:27.088807 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.088816 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:27.088822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:27.088889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:27.115559 1685746 cri.go:96] found id: ""
	I1222 01:40:27.115581 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.115590 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:27.115596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:27.115658 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:27.141509 1685746 cri.go:96] found id: ""
	I1222 01:40:27.141579 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.141602 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:27.141624 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:27.141712 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:27.168944 1685746 cri.go:96] found id: ""
	I1222 01:40:27.168984 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.168993 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:27.169006 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:27.169076 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:27.194554 1685746 cri.go:96] found id: ""
	I1222 01:40:27.194584 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.194593 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:27.194599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:27.194662 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:27.219603 1685746 cri.go:96] found id: ""
	I1222 01:40:27.219684 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.219707 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:27.219721 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:27.219801 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:27.246999 1685746 cri.go:96] found id: ""
	I1222 01:40:27.247033 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.247042 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:27.247067 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:27.247087 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:27.302977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:27.303012 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:27.318364 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:27.318398 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:27.385339 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:27.385413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:27.385442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:27.411346 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:27.411384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:29.941731 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:29.955808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:29.955883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:29.982684 1685746 cri.go:96] found id: ""
	I1222 01:40:29.982709 1685746 logs.go:282] 0 containers: []
	W1222 01:40:29.982718 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:29.982725 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:29.982796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:30.036793 1685746 cri.go:96] found id: ""
	I1222 01:40:30.036836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.036847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:30.036858 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:30.036986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:30.127706 1685746 cri.go:96] found id: ""
	I1222 01:40:30.127740 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.127750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:30.127757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:30.127828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:30.158476 1685746 cri.go:96] found id: ""
	I1222 01:40:30.158509 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.158521 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:30.158529 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:30.158598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:30.187425 1685746 cri.go:96] found id: ""
	I1222 01:40:30.187453 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.187463 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:30.187470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:30.187539 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:30.216013 1685746 cri.go:96] found id: ""
	I1222 01:40:30.216043 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.216052 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:30.216060 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:30.216125 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:30.241947 1685746 cri.go:96] found id: ""
	I1222 01:40:30.241975 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.241985 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:30.241991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:30.242074 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:30.271569 1685746 cri.go:96] found id: ""
	I1222 01:40:30.271595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.271603 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:30.271613 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:30.271625 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:30.327858 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:30.327896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:30.343479 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:30.343505 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:30.411657 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:30.411678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:30.411692 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:30.436851 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:30.436886 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:30.511390 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:40:30.582457 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:30.582560 1685746 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:40:30.587532 1685746 out.go:179] * Enabled addons: 
	I1222 01:40:30.590426 1685746 addons.go:530] duration metric: took 1m51.812167431s for enable addons: enabled=[]
	I1222 01:40:32.969406 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:32.980360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:32.980444 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:33.016753 1685746 cri.go:96] found id: ""
	I1222 01:40:33.016778 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.016787 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:33.016795 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:33.016881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:33.053288 1685746 cri.go:96] found id: ""
	I1222 01:40:33.053315 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.053334 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:33.053358 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:33.053457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:33.087392 1685746 cri.go:96] found id: ""
	I1222 01:40:33.087417 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.087426 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:33.087432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:33.087492 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:33.113564 1685746 cri.go:96] found id: ""
	I1222 01:40:33.113595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.113604 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:33.113611 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:33.113698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:33.143733 1685746 cri.go:96] found id: ""
	I1222 01:40:33.143757 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.143766 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:33.143772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:33.143835 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:33.169776 1685746 cri.go:96] found id: ""
	I1222 01:40:33.169808 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.169816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:33.169824 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:33.169887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:33.198413 1685746 cri.go:96] found id: ""
	I1222 01:40:33.198438 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.198446 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:33.198453 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:33.198514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:33.223746 1685746 cri.go:96] found id: ""
	I1222 01:40:33.223816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.223838 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:33.223855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:33.223866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:33.249217 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:33.249247 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:33.282243 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:33.282269 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:33.340677 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:33.340714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:33.355635 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:33.355667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:33.438690 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:35.940454 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:35.954241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:35.954312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:35.979549 1685746 cri.go:96] found id: ""
	I1222 01:40:35.979576 1685746 logs.go:282] 0 containers: []
	W1222 01:40:35.979585 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:35.979592 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:35.979654 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:36.010177 1685746 cri.go:96] found id: ""
	I1222 01:40:36.010207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.010217 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:36.010224 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:36.010295 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:36.045048 1685746 cri.go:96] found id: ""
	I1222 01:40:36.045078 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.045088 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:36.045095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:36.045157 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:36.074866 1685746 cri.go:96] found id: ""
	I1222 01:40:36.074889 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.074897 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:36.074903 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:36.074965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:36.101425 1685746 cri.go:96] found id: ""
	I1222 01:40:36.101499 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.101511 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:36.101518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:36.106750 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:36.134167 1685746 cri.go:96] found id: ""
	I1222 01:40:36.134205 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.134215 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:36.134223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:36.134288 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:36.159767 1685746 cri.go:96] found id: ""
	I1222 01:40:36.159792 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.159802 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:36.159809 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:36.159873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:36.188878 1685746 cri.go:96] found id: ""
	I1222 01:40:36.188907 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.188917 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:36.188928 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:36.188941 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:36.253797 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:36.253877 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:36.253906 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:36.279371 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:36.279408 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:36.308866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:36.308901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:36.365568 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:36.365603 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:38.881766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:38.892862 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:38.892944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:38.919366 1685746 cri.go:96] found id: ""
	I1222 01:40:38.919399 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.919409 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:38.919421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:38.919495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:38.953015 1685746 cri.go:96] found id: ""
	I1222 01:40:38.953042 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.953051 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:38.953058 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:38.953121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:38.979133 1685746 cri.go:96] found id: ""
	I1222 01:40:38.979158 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.979167 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:38.979173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:38.979236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:39.017688 1685746 cri.go:96] found id: ""
	I1222 01:40:39.017714 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.017724 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:39.017735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:39.017797 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:39.056591 1685746 cri.go:96] found id: ""
	I1222 01:40:39.056614 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.056622 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:39.056629 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:39.056686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:39.085085 1685746 cri.go:96] found id: ""
	I1222 01:40:39.085155 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.085177 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:39.085199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:39.085296 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:39.114614 1685746 cri.go:96] found id: ""
	I1222 01:40:39.114640 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.114649 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:39.114656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:39.114738 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:39.140466 1685746 cri.go:96] found id: ""
	I1222 01:40:39.140511 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.140520 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:39.140545 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:39.140564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:39.208956 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:39.208979 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:39.208992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:39.234396 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:39.234430 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:39.264983 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:39.265011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:39.320138 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:39.320173 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:41.835978 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:41.846958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:41.847061 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:41.872281 1685746 cri.go:96] found id: ""
	I1222 01:40:41.872307 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.872318 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:41.872324 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:41.872429 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:41.902068 1685746 cri.go:96] found id: ""
	I1222 01:40:41.902127 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.902137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:41.902163 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:41.902275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:41.936505 1685746 cri.go:96] found id: ""
	I1222 01:40:41.936535 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.936544 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:41.936550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:41.936615 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:41.961446 1685746 cri.go:96] found id: ""
	I1222 01:40:41.961480 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.961489 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:41.961496 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:41.961569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:41.989500 1685746 cri.go:96] found id: ""
	I1222 01:40:41.989582 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.989606 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:41.989631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:41.989730 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:42.028918 1685746 cri.go:96] found id: ""
	I1222 01:40:42.028947 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.028956 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:42.028963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:42.029037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:42.065570 1685746 cri.go:96] found id: ""
	I1222 01:40:42.065618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.065633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:42.065641 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:42.065724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:42.095634 1685746 cri.go:96] found id: ""
	I1222 01:40:42.095661 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.095671 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:42.095681 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:42.095702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:42.158126 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:42.158170 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:42.175600 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:42.175640 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:42.256856 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:42.256882 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:42.256896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:42.283618 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:42.283665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:44.813189 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:44.824766 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:44.824836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:44.853167 1685746 cri.go:96] found id: ""
	I1222 01:40:44.853192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.853201 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:44.853208 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:44.853269 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:44.878679 1685746 cri.go:96] found id: ""
	I1222 01:40:44.878711 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.878721 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:44.878728 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:44.878792 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:44.905070 1685746 cri.go:96] found id: ""
	I1222 01:40:44.905097 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.905106 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:44.905113 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:44.905177 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:44.930494 1685746 cri.go:96] found id: ""
	I1222 01:40:44.930523 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.930533 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:44.930539 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:44.930599 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:44.960159 1685746 cri.go:96] found id: ""
	I1222 01:40:44.960187 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.960196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:44.960203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:44.960308 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:44.985038 1685746 cri.go:96] found id: ""
	I1222 01:40:44.985066 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.985076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:44.985083 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:44.985147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:45.046474 1685746 cri.go:96] found id: ""
	I1222 01:40:45.046501 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.046511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:45.046518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:45.046590 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:45.111231 1685746 cri.go:96] found id: ""
	I1222 01:40:45.111266 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.111275 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:45.111286 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:45.111299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:45.180293 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:45.180418 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:45.231743 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:45.231786 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:45.318004 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:45.318031 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:45.318045 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:45.351434 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:45.351474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:47.885492 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:47.896303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:47.896380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:47.927221 1685746 cri.go:96] found id: ""
	I1222 01:40:47.927247 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.927257 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:47.927264 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:47.927326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:47.955055 1685746 cri.go:96] found id: ""
	I1222 01:40:47.955082 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.955091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:47.955098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:47.955167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:47.982730 1685746 cri.go:96] found id: ""
	I1222 01:40:47.982760 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.982770 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:47.982777 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:47.982841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:48.013060 1685746 cri.go:96] found id: ""
	I1222 01:40:48.013093 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.013104 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:48.013111 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:48.013184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:48.044824 1685746 cri.go:96] found id: ""
	I1222 01:40:48.044902 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.044918 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:48.044926 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:48.044994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:48.077777 1685746 cri.go:96] found id: ""
	I1222 01:40:48.077806 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.077816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:48.077822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:48.077887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:48.108631 1685746 cri.go:96] found id: ""
	I1222 01:40:48.108659 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.108669 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:48.108676 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:48.108767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:48.135002 1685746 cri.go:96] found id: ""
	I1222 01:40:48.135035 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.135045 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:48.135056 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:48.135092 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:48.192262 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:48.192299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:48.207972 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:48.208074 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:48.295537 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:48.295563 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:48.295583 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:48.322629 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:48.322665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:50.857236 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:50.868315 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:50.868396 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:50.894289 1685746 cri.go:96] found id: ""
	I1222 01:40:50.894337 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.894346 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:50.894353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:50.894414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:50.920265 1685746 cri.go:96] found id: ""
	I1222 01:40:50.920288 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.920297 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:50.920303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:50.920362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:50.946413 1685746 cri.go:96] found id: ""
	I1222 01:40:50.946437 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.946445 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:50.946452 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:50.946511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:50.973167 1685746 cri.go:96] found id: ""
	I1222 01:40:50.973192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.973202 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:50.973209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:50.973278 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:50.998695 1685746 cri.go:96] found id: ""
	I1222 01:40:50.998730 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.998739 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:50.998746 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:50.998812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:51.027679 1685746 cri.go:96] found id: ""
	I1222 01:40:51.027748 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.027770 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:51.027792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:51.027882 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:51.057709 1685746 cri.go:96] found id: ""
	I1222 01:40:51.057791 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.057816 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:51.057839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:51.057933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:51.085239 1685746 cri.go:96] found id: ""
	I1222 01:40:51.085311 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.085335 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:51.085361 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:51.085402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:51.143088 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:51.143131 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:51.159838 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:51.159866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:51.229894 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:51.229917 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:51.229932 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:51.258211 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:51.258321 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:53.799763 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:53.811321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:53.811400 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:53.838808 1685746 cri.go:96] found id: ""
	I1222 01:40:53.838834 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.838844 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:53.838851 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:53.838918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:53.865906 1685746 cri.go:96] found id: ""
	I1222 01:40:53.865930 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.865938 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:53.865945 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:53.866008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:53.891986 1685746 cri.go:96] found id: ""
	I1222 01:40:53.892030 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.892040 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:53.892047 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:53.892120 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:53.918633 1685746 cri.go:96] found id: ""
	I1222 01:40:53.918660 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.918670 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:53.918677 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:53.918748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:53.945224 1685746 cri.go:96] found id: ""
	I1222 01:40:53.945259 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.945268 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:53.945274 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:53.945345 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:53.976181 1685746 cri.go:96] found id: ""
	I1222 01:40:53.976207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.976216 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:53.976223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:53.976286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:54.017529 1685746 cri.go:96] found id: ""
	I1222 01:40:54.017609 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.017633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:54.017657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:54.017766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:54.050157 1685746 cri.go:96] found id: ""
	I1222 01:40:54.050234 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.050257 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:54.050284 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:54.050322 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:54.107873 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:54.107911 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:54.123115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:54.123192 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:54.189938 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:54.189963 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:54.189976 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:54.216904 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:54.216959 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:56.757953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:56.769647 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:56.769793 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:56.802913 1685746 cri.go:96] found id: ""
	I1222 01:40:56.802941 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.802951 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:56.802958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:56.803018 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:56.828625 1685746 cri.go:96] found id: ""
	I1222 01:40:56.828654 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.828664 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:56.828671 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:56.828734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:56.853350 1685746 cri.go:96] found id: ""
	I1222 01:40:56.853378 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.853388 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:56.853394 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:56.853456 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:56.883418 1685746 cri.go:96] found id: ""
	I1222 01:40:56.883443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.883458 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:56.883466 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:56.883532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:56.912769 1685746 cri.go:96] found id: ""
	I1222 01:40:56.912799 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.912809 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:56.912817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:56.912880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:56.938494 1685746 cri.go:96] found id: ""
	I1222 01:40:56.938519 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.938529 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:56.938536 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:56.938602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:56.968944 1685746 cri.go:96] found id: ""
	I1222 01:40:56.968978 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.968987 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:56.968994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:56.969063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:56.995238 1685746 cri.go:96] found id: ""
	I1222 01:40:56.995265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.995274 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:56.995284 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:56.995295 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:57.022601 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:57.022641 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:57.055915 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:57.055993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:57.110958 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:57.110993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:57.126557 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:57.126587 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:57.199192 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:59.699460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:59.709928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:59.709999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:59.734831 1685746 cri.go:96] found id: ""
	I1222 01:40:59.734861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.734870 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:59.734876 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:59.734939 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:59.766737 1685746 cri.go:96] found id: ""
	I1222 01:40:59.766765 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.766773 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:59.766785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:59.766845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:59.800714 1685746 cri.go:96] found id: ""
	I1222 01:40:59.800742 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.800751 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:59.800757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:59.800817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:59.828842 1685746 cri.go:96] found id: ""
	I1222 01:40:59.828871 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.828880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:59.828888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:59.828951 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:59.854824 1685746 cri.go:96] found id: ""
	I1222 01:40:59.854848 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.854857 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:59.854864 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:59.854928 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:59.879691 1685746 cri.go:96] found id: ""
	I1222 01:40:59.879761 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.879784 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:59.879798 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:59.879874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:59.905099 1685746 cri.go:96] found id: ""
	I1222 01:40:59.905136 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.905146 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:59.905152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:59.905232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:59.929727 1685746 cri.go:96] found id: ""
	I1222 01:40:59.929763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.929775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:59.929784 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:59.929794 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:59.985430 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:59.985466 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:00.001212 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:00.001238 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:00.267041 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:00.267072 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:00.267085 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:00.299707 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:00.299756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:02.866175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:02.877065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:02.877139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:02.902030 1685746 cri.go:96] found id: ""
	I1222 01:41:02.902137 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.902161 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:02.902183 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:02.902277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:02.928023 1685746 cri.go:96] found id: ""
	I1222 01:41:02.928048 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.928058 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:02.928065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:02.928128 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:02.958559 1685746 cri.go:96] found id: ""
	I1222 01:41:02.958595 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.958605 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:02.958612 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:02.958675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:02.984249 1685746 cri.go:96] found id: ""
	I1222 01:41:02.984272 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.984281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:02.984287 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:02.984355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:03.033125 1685746 cri.go:96] found id: ""
	I1222 01:41:03.033152 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.033161 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:03.033167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:03.033228 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:03.058557 1685746 cri.go:96] found id: ""
	I1222 01:41:03.058583 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.058591 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:03.058598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:03.058657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:03.089068 1685746 cri.go:96] found id: ""
	I1222 01:41:03.089112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.089122 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:03.089132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:03.089210 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:03.119177 1685746 cri.go:96] found id: ""
	I1222 01:41:03.119201 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.119210 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:03.119220 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:03.119231 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:03.182970 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:03.183000 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:03.183013 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:03.207694 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:03.207726 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:03.238481 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:03.238559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:03.311496 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:03.311531 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:05.829656 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:05.840301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:05.840394 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:05.867057 1685746 cri.go:96] found id: ""
	I1222 01:41:05.867080 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.867089 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:05.867095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:05.867155 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:05.897184 1685746 cri.go:96] found id: ""
	I1222 01:41:05.897206 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.897215 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:05.897221 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:05.897284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:05.922902 1685746 cri.go:96] found id: ""
	I1222 01:41:05.922924 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.922933 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:05.922940 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:05.923001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:05.947567 1685746 cri.go:96] found id: ""
	I1222 01:41:05.947591 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.947600 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:05.947606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:05.947725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:05.973767 1685746 cri.go:96] found id: ""
	I1222 01:41:05.973795 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.973803 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:05.973810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:05.973870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:05.999045 1685746 cri.go:96] found id: ""
	I1222 01:41:05.999075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.999084 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:05.999090 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:05.999156 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:06.037292 1685746 cri.go:96] found id: ""
	I1222 01:41:06.037323 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.037331 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:06.037338 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:06.037403 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:06.063105 1685746 cri.go:96] found id: ""
	I1222 01:41:06.063136 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.063145 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:06.063155 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:06.063166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:06.118645 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:06.118682 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:06.134249 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:06.134283 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:06.202948 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:06.202967 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:06.202978 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:06.227736 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:06.227770 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:08.763766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:08.776166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:08.776292 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:08.802744 1685746 cri.go:96] found id: ""
	I1222 01:41:08.802770 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.802780 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:08.802787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:08.802897 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:08.829155 1685746 cri.go:96] found id: ""
	I1222 01:41:08.829196 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.829205 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:08.829212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:08.829286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:08.853323 1685746 cri.go:96] found id: ""
	I1222 01:41:08.853358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.853368 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:08.853374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:08.853442 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:08.878843 1685746 cri.go:96] found id: ""
	I1222 01:41:08.878871 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.878880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:08.878887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:08.878948 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:08.907348 1685746 cri.go:96] found id: ""
	I1222 01:41:08.907374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.907383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:08.907390 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:08.907459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:08.935980 1685746 cri.go:96] found id: ""
	I1222 01:41:08.936006 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.936015 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:08.936022 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:08.936103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:08.965110 1685746 cri.go:96] found id: ""
	I1222 01:41:08.965149 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.965159 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:08.965165 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:08.965240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:08.991481 1685746 cri.go:96] found id: ""
	I1222 01:41:08.991509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.991518 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:08.991527 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:08.991539 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:09.007297 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:09.007330 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:09.077476 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:09.077557 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:09.077597 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:09.102923 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:09.102958 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:09.131422 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:09.131450 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:11.686744 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:11.697606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:11.697689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:11.722593 1685746 cri.go:96] found id: ""
	I1222 01:41:11.722664 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.722686 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:11.722701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:11.722796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:11.767413 1685746 cri.go:96] found id: ""
	I1222 01:41:11.767439 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.767448 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:11.767454 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:11.767526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:11.800344 1685746 cri.go:96] found id: ""
	I1222 01:41:11.800433 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.800466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:11.800487 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:11.800594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:11.836608 1685746 cri.go:96] found id: ""
	I1222 01:41:11.836693 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.836717 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:11.836755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:11.836854 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:11.862781 1685746 cri.go:96] found id: ""
	I1222 01:41:11.862808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.862818 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:11.862830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:11.862894 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:11.891376 1685746 cri.go:96] found id: ""
	I1222 01:41:11.891401 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.891410 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:11.891416 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:11.891480 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:11.920553 1685746 cri.go:96] found id: ""
	I1222 01:41:11.920581 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.920590 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:11.920596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:11.920657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:11.948610 1685746 cri.go:96] found id: ""
	I1222 01:41:11.948634 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.948642 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:11.948651 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:11.948662 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:12.006298 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:12.006340 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:12.022860 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:12.022889 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:12.087185 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:12.087252 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:12.087282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:12.112381 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:12.112415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:14.645175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:14.655581 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:14.655655 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:14.683086 1685746 cri.go:96] found id: ""
	I1222 01:41:14.683110 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.683118 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:14.683125 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:14.683192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:14.708684 1685746 cri.go:96] found id: ""
	I1222 01:41:14.708707 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.708716 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:14.708723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:14.708783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:14.733550 1685746 cri.go:96] found id: ""
	I1222 01:41:14.733572 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.733580 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:14.733586 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:14.733653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:14.762029 1685746 cri.go:96] found id: ""
	I1222 01:41:14.762052 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.762061 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:14.762068 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:14.762191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:14.802569 1685746 cri.go:96] found id: ""
	I1222 01:41:14.802593 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.802602 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:14.802609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:14.802668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:14.829402 1685746 cri.go:96] found id: ""
	I1222 01:41:14.829425 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.829434 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:14.829440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:14.829499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:14.854254 1685746 cri.go:96] found id: ""
	I1222 01:41:14.854276 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.854285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:14.854291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:14.854350 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:14.879183 1685746 cri.go:96] found id: ""
	I1222 01:41:14.879205 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.879213 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:14.879222 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:14.879239 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:14.933758 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:14.933795 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:14.948809 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:14.948834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:15.022478 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:15.022594 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:15.022610 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:15.071291 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:15.071336 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:17.608065 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:17.618810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:17.618881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:17.643606 1685746 cri.go:96] found id: ""
	I1222 01:41:17.643633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.643643 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:17.643650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:17.643760 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:17.669609 1685746 cri.go:96] found id: ""
	I1222 01:41:17.669639 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.669649 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:17.669656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:17.669725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:17.694910 1685746 cri.go:96] found id: ""
	I1222 01:41:17.694934 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.694943 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:17.694950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:17.695009 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:17.721067 1685746 cri.go:96] found id: ""
	I1222 01:41:17.721101 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.721111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:17.721118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:17.721251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:17.762594 1685746 cri.go:96] found id: ""
	I1222 01:41:17.762669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.762691 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:17.762715 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:17.762802 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:17.806835 1685746 cri.go:96] found id: ""
	I1222 01:41:17.806870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.806880 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:17.806887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:17.806964 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:17.837236 1685746 cri.go:96] found id: ""
	I1222 01:41:17.837273 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.837284 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:17.837291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:17.837362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:17.867730 1685746 cri.go:96] found id: ""
	I1222 01:41:17.867802 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.867825 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:17.867840 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:17.867852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:17.927517 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:17.927555 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:17.943454 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:17.943484 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:18.012436 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:18.012522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:18.012553 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:18.040219 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:18.040262 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:20.572279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:20.583193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:20.583266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:20.609051 1685746 cri.go:96] found id: ""
	I1222 01:41:20.609075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.609083 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:20.609089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:20.609150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:20.635365 1685746 cri.go:96] found id: ""
	I1222 01:41:20.635391 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.635400 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:20.635406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:20.635470 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:20.664505 1685746 cri.go:96] found id: ""
	I1222 01:41:20.664532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.664541 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:20.664547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:20.664609 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:20.690863 1685746 cri.go:96] found id: ""
	I1222 01:41:20.690887 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.690904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:20.690916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:20.690981 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:20.716167 1685746 cri.go:96] found id: ""
	I1222 01:41:20.716188 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.716196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:20.716203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:20.716262 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:20.758512 1685746 cri.go:96] found id: ""
	I1222 01:41:20.758538 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.758547 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:20.758554 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:20.758612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:20.789839 1685746 cri.go:96] found id: ""
	I1222 01:41:20.789866 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.789875 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:20.789882 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:20.789944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:20.823216 1685746 cri.go:96] found id: ""
	I1222 01:41:20.823244 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.823254 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:20.823263 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:20.823275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:20.878834 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:20.878873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:20.894375 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:20.894409 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:20.963456 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:20.963479 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:20.963518 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:20.992875 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:20.992916 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:23.526237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:23.540126 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:23.540244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:23.567806 1685746 cri.go:96] found id: ""
	I1222 01:41:23.567833 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.567842 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:23.567849 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:23.567915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:23.594496 1685746 cri.go:96] found id: ""
	I1222 01:41:23.594525 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.594538 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:23.594546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:23.594614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:23.621007 1685746 cri.go:96] found id: ""
	I1222 01:41:23.621034 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.621043 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:23.621050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:23.621111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:23.646829 1685746 cri.go:96] found id: ""
	I1222 01:41:23.646857 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.646867 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:23.646874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:23.646941 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:23.672993 1685746 cri.go:96] found id: ""
	I1222 01:41:23.673020 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.673030 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:23.673036 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:23.673099 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:23.704873 1685746 cri.go:96] found id: ""
	I1222 01:41:23.704901 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.704910 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:23.704916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:23.704980 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:23.731220 1685746 cri.go:96] found id: ""
	I1222 01:41:23.731248 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.731259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:23.731265 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:23.731330 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:23.769641 1685746 cri.go:96] found id: ""
	I1222 01:41:23.769669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.769678 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:23.769687 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:23.769701 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:23.811900 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:23.811928 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:23.870851 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:23.870887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:23.886411 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:23.886488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:23.954566 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:23.954588 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:23.954602 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.483766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:26.495024 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:26.495100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:26.521679 1685746 cri.go:96] found id: ""
	I1222 01:41:26.521706 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.521716 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:26.521723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:26.521786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:26.552746 1685746 cri.go:96] found id: ""
	I1222 01:41:26.552773 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.552782 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:26.552789 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:26.552856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:26.580045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.580072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.580082 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:26.580088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:26.580151 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:26.606656 1685746 cri.go:96] found id: ""
	I1222 01:41:26.606683 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.606693 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:26.606700 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:26.606759 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:26.632499 1685746 cri.go:96] found id: ""
	I1222 01:41:26.632539 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.632548 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:26.632556 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:26.632640 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:26.664045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.664072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.664082 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:26.664089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:26.664172 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:26.689648 1685746 cri.go:96] found id: ""
	I1222 01:41:26.689672 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.689693 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:26.689704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:26.689772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:26.715926 1685746 cri.go:96] found id: ""
	I1222 01:41:26.715949 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.715958 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:26.715966 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:26.715977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:26.779696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:26.779785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:26.802335 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:26.802412 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:26.866575 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:26.866599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:26.866613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.893136 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:26.893176 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:29.425895 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:29.438488 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:29.438569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:29.467384 1685746 cri.go:96] found id: ""
	I1222 01:41:29.467415 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.467426 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:29.467432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:29.467497 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:29.502253 1685746 cri.go:96] found id: ""
	I1222 01:41:29.502277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.502285 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:29.502291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:29.502351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:29.538703 1685746 cri.go:96] found id: ""
	I1222 01:41:29.538730 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.538739 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:29.538747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:29.538809 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:29.567395 1685746 cri.go:96] found id: ""
	I1222 01:41:29.567422 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.567431 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:29.567439 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:29.567500 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:29.595415 1685746 cri.go:96] found id: ""
	I1222 01:41:29.595493 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.595508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:29.595516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:29.595583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:29.622583 1685746 cri.go:96] found id: ""
	I1222 01:41:29.622611 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.622620 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:29.622627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:29.622693 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:29.649130 1685746 cri.go:96] found id: ""
	I1222 01:41:29.649156 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.649166 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:29.649173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:29.649240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:29.676205 1685746 cri.go:96] found id: ""
	I1222 01:41:29.676231 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.676240 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:29.676250 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:29.676279 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:29.731980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:29.732016 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:29.747474 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:29.747503 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:29.833319 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:29.833342 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:29.833355 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:29.859398 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:29.859432 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:32.387755 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:32.398548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:32.398639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:32.422848 1685746 cri.go:96] found id: ""
	I1222 01:41:32.422870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.422879 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:32.422885 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:32.422976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:32.448126 1685746 cri.go:96] found id: ""
	I1222 01:41:32.448153 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.448162 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:32.448171 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:32.448233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:32.476732 1685746 cri.go:96] found id: ""
	I1222 01:41:32.476769 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.476779 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:32.476785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:32.476856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:32.521856 1685746 cri.go:96] found id: ""
	I1222 01:41:32.521885 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.521915 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:32.521923 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:32.522010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:32.559083 1685746 cri.go:96] found id: ""
	I1222 01:41:32.559112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.559121 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:32.559128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:32.559199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:32.585037 1685746 cri.go:96] found id: ""
	I1222 01:41:32.585066 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.585076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:32.585082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:32.585142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:32.611094 1685746 cri.go:96] found id: ""
	I1222 01:41:32.611117 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.611126 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:32.611132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:32.611200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:32.636572 1685746 cri.go:96] found id: ""
	I1222 01:41:32.636598 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.636606 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:32.636614 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:32.636626 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:32.691721 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:32.691756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:32.706757 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:32.706791 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:32.784203 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:32.784277 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:32.784302 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:32.812067 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:32.812099 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:35.344181 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:35.354549 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:35.354621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:35.378138 1685746 cri.go:96] found id: ""
	I1222 01:41:35.378160 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.378169 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:35.378177 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:35.378236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:35.403725 1685746 cri.go:96] found id: ""
	I1222 01:41:35.403748 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.403757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:35.403764 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:35.403825 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:35.429025 1685746 cri.go:96] found id: ""
	I1222 01:41:35.429050 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.429059 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:35.429066 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:35.429129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:35.459607 1685746 cri.go:96] found id: ""
	I1222 01:41:35.459633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.459642 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:35.459649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:35.459707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:35.483992 1685746 cri.go:96] found id: ""
	I1222 01:41:35.484015 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.484024 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:35.484031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:35.484094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:35.517254 1685746 cri.go:96] found id: ""
	I1222 01:41:35.517277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.517286 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:35.517293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:35.517353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:35.546137 1685746 cri.go:96] found id: ""
	I1222 01:41:35.546219 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.546242 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:35.546284 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:35.546378 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:35.576307 1685746 cri.go:96] found id: ""
	I1222 01:41:35.576329 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.576338 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:35.576347 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:35.576358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:35.631853 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:35.631887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:35.646787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:35.646827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:35.713895 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:35.713927 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:35.713943 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:35.739168 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:35.739250 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:38.278358 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:38.289460 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:38.289534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:38.316292 1685746 cri.go:96] found id: ""
	I1222 01:41:38.316320 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.316329 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:38.316336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:38.316416 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:38.344932 1685746 cri.go:96] found id: ""
	I1222 01:41:38.344960 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.344969 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:38.344976 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:38.345038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:38.371484 1685746 cri.go:96] found id: ""
	I1222 01:41:38.371509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.371519 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:38.371525 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:38.371594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:38.401114 1685746 cri.go:96] found id: ""
	I1222 01:41:38.401140 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.401149 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:38.401157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:38.401217 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:38.427857 1685746 cri.go:96] found id: ""
	I1222 01:41:38.427881 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.427890 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:38.427897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:38.427962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:38.453333 1685746 cri.go:96] found id: ""
	I1222 01:41:38.453358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.453367 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:38.453374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:38.453455 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:38.477527 1685746 cri.go:96] found id: ""
	I1222 01:41:38.477610 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.477633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:38.477655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:38.477748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:38.523741 1685746 cri.go:96] found id: ""
	I1222 01:41:38.523763 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.523772 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:38.523787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:38.523798 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:38.595469 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:38.595491 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:38.595508 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:38.621769 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:38.621808 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:38.651477 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:38.651507 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:38.710896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:38.710934 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.227040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:41.237881 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:41.237954 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:41.265636 1685746 cri.go:96] found id: ""
	I1222 01:41:41.265671 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.265680 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:41.265687 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:41.265757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:41.291304 1685746 cri.go:96] found id: ""
	I1222 01:41:41.291330 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.291339 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:41.291346 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:41.291414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:41.316968 1685746 cri.go:96] found id: ""
	I1222 01:41:41.317003 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.317013 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:41.317020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:41.317094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:41.342750 1685746 cri.go:96] found id: ""
	I1222 01:41:41.342779 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.342794 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:41.342801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:41.342865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:41.368173 1685746 cri.go:96] found id: ""
	I1222 01:41:41.368197 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.368205 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:41.368212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:41.368275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:41.396263 1685746 cri.go:96] found id: ""
	I1222 01:41:41.396290 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.396300 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:41.396308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:41.396380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:41.424002 1685746 cri.go:96] found id: ""
	I1222 01:41:41.424028 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.424037 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:41.424044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:41.424104 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:41.450858 1685746 cri.go:96] found id: ""
	I1222 01:41:41.450886 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.450894 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:41.450904 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:41.450915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:41.510703 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:41.510785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.529398 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:41.529475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:41.596968 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:41.596989 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:41.597002 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:41.623436 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:41.623472 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:44.153585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:44.164792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:44.164865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:44.190259 1685746 cri.go:96] found id: ""
	I1222 01:41:44.190282 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.190290 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:44.190297 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:44.190357 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:44.223886 1685746 cri.go:96] found id: ""
	I1222 01:41:44.223911 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.223922 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:44.223929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:44.223988 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:44.249898 1685746 cri.go:96] found id: ""
	I1222 01:41:44.249922 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.249931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:44.249948 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:44.250010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:44.275190 1685746 cri.go:96] found id: ""
	I1222 01:41:44.275217 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.275227 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:44.275233 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:44.275325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:44.301198 1685746 cri.go:96] found id: ""
	I1222 01:41:44.301221 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.301230 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:44.301237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:44.301311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:44.325952 1685746 cri.go:96] found id: ""
	I1222 01:41:44.325990 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.326000 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:44.326023 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:44.326154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:44.352189 1685746 cri.go:96] found id: ""
	I1222 01:41:44.352227 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.352236 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:44.352259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:44.352334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:44.377820 1685746 cri.go:96] found id: ""
	I1222 01:41:44.377848 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.377858 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:44.377868 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:44.377879 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:44.393230 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:44.393258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:44.463151 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:44.463175 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:44.463188 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:44.488611 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:44.488690 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:44.523935 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:44.524011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:47.091277 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:47.102299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:47.102374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:47.128309 1685746 cri.go:96] found id: ""
	I1222 01:41:47.128334 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.128344 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:47.128351 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:47.128431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:47.154429 1685746 cri.go:96] found id: ""
	I1222 01:41:47.154456 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.154465 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:47.154473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:47.154535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:47.179829 1685746 cri.go:96] found id: ""
	I1222 01:41:47.179856 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.179865 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:47.179872 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:47.179933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:47.204965 1685746 cri.go:96] found id: ""
	I1222 01:41:47.204999 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.205009 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:47.205016 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:47.205088 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:47.231912 1685746 cri.go:96] found id: ""
	I1222 01:41:47.231939 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.231949 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:47.231955 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:47.232043 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:47.262187 1685746 cri.go:96] found id: ""
	I1222 01:41:47.262215 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.262230 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:47.262237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:47.262301 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:47.287536 1685746 cri.go:96] found id: ""
	I1222 01:41:47.287567 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.287577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:47.287583 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:47.287648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:47.313516 1685746 cri.go:96] found id: ""
	I1222 01:41:47.313544 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.313553 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:47.313563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:47.313573 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:47.369295 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:47.369329 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:47.387169 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:47.387197 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:47.455311 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:47.455335 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:47.455347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:47.481041 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:47.481078 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:50.030868 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:50.043616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:50.043692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:50.072180 1685746 cri.go:96] found id: ""
	I1222 01:41:50.072210 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.072220 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:50.072229 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:50.072297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:50.100979 1685746 cri.go:96] found id: ""
	I1222 01:41:50.101005 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.101014 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:50.101021 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:50.101091 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:50.128360 1685746 cri.go:96] found id: ""
	I1222 01:41:50.128392 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.128404 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:50.128411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:50.128476 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:50.154912 1685746 cri.go:96] found id: ""
	I1222 01:41:50.154945 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.154955 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:50.154963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:50.155033 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:50.181433 1685746 cri.go:96] found id: ""
	I1222 01:41:50.181465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.181474 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:50.181483 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:50.181553 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:50.207260 1685746 cri.go:96] found id: ""
	I1222 01:41:50.207289 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.207299 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:50.207305 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:50.207366 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:50.234601 1685746 cri.go:96] found id: ""
	I1222 01:41:50.234649 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.234659 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:50.234666 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:50.234744 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:50.264579 1685746 cri.go:96] found id: ""
	I1222 01:41:50.264621 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.264631 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:50.264641 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:50.264661 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:50.321078 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:50.321112 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:50.336044 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:50.336069 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:50.401373 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:50.401396 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:50.401410 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:50.428108 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:50.428151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:52.958393 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:52.969793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:52.969867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:53.021307 1685746 cri.go:96] found id: ""
	I1222 01:41:53.021331 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.021340 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:53.021352 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:53.021415 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:53.053765 1685746 cri.go:96] found id: ""
	I1222 01:41:53.053789 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.053798 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:53.053804 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:53.053872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:53.079107 1685746 cri.go:96] found id: ""
	I1222 01:41:53.079135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.079144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:53.079152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:53.079214 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:53.106101 1685746 cri.go:96] found id: ""
	I1222 01:41:53.106130 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.106138 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:53.106145 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:53.106209 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:53.135616 1685746 cri.go:96] found id: ""
	I1222 01:41:53.135643 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.135652 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:53.135659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:53.135766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:53.160318 1685746 cri.go:96] found id: ""
	I1222 01:41:53.160344 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.160353 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:53.160360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:53.160451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:53.185257 1685746 cri.go:96] found id: ""
	I1222 01:41:53.185297 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.185306 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:53.185313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:53.185401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:53.210753 1685746 cri.go:96] found id: ""
	I1222 01:41:53.210824 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.210839 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:53.210855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:53.210867 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:53.237290 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:53.237323 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:53.267342 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:53.267374 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:53.323394 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:53.323429 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:53.339435 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:53.339465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:53.403286 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:55.903619 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:55.914760 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:55.914836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:55.939507 1685746 cri.go:96] found id: ""
	I1222 01:41:55.939532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.939541 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:55.939548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:55.939614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:55.965607 1685746 cri.go:96] found id: ""
	I1222 01:41:55.965633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.965643 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:55.965649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:55.965715 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:56.006138 1685746 cri.go:96] found id: ""
	I1222 01:41:56.006171 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.006181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:56.006188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:56.006256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:56.040087 1685746 cri.go:96] found id: ""
	I1222 01:41:56.040116 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.040125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:56.040131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:56.040191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:56.068695 1685746 cri.go:96] found id: ""
	I1222 01:41:56.068719 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.068727 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:56.068734 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:56.068795 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:56.096726 1685746 cri.go:96] found id: ""
	I1222 01:41:56.096808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.096832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:56.096854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:56.096963 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:56.125548 1685746 cri.go:96] found id: ""
	I1222 01:41:56.125627 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.125652 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:56.125675 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:56.125763 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:56.150956 1685746 cri.go:96] found id: ""
	I1222 01:41:56.150986 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.150995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:56.151005 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:56.151049 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:56.216560 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:56.216581 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:56.216594 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:56.242334 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:56.242368 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:56.270763 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:56.270793 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:56.325996 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:56.326038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:58.841618 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:58.852321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:58.852411 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:58.877439 1685746 cri.go:96] found id: ""
	I1222 01:41:58.877465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.877475 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:58.877482 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:58.877542 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:58.902343 1685746 cri.go:96] found id: ""
	I1222 01:41:58.902369 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.902378 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:58.902385 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:58.902443 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:58.927733 1685746 cri.go:96] found id: ""
	I1222 01:41:58.927758 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.927767 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:58.927774 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:58.927834 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:58.954349 1685746 cri.go:96] found id: ""
	I1222 01:41:58.954374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.954384 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:58.954391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:58.954464 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:58.984449 1685746 cri.go:96] found id: ""
	I1222 01:41:58.984519 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.984533 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:58.984541 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:58.984612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:59.020245 1685746 cri.go:96] found id: ""
	I1222 01:41:59.020277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.020294 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:59.020303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:59.020387 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:59.059067 1685746 cri.go:96] found id: ""
	I1222 01:41:59.059135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.059157 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:59.059170 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:59.059244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:59.090327 1685746 cri.go:96] found id: ""
	I1222 01:41:59.090355 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.090364 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:59.090372 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:59.090384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:59.149768 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:59.149809 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:59.164825 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:59.164857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:59.232698 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:59.232720 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:59.232734 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:59.258805 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:59.258840 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:01.787611 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:01.799088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:01.799206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:01.829442 1685746 cri.go:96] found id: ""
	I1222 01:42:01.829521 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.829543 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:01.829566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:01.829657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:01.856095 1685746 cri.go:96] found id: ""
	I1222 01:42:01.856122 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.856132 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:01.856139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:01.856203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:01.882443 1685746 cri.go:96] found id: ""
	I1222 01:42:01.882469 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.882478 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:01.882485 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:01.882549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:01.908008 1685746 cri.go:96] found id: ""
	I1222 01:42:01.908033 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.908043 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:01.908049 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:01.908111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:01.934350 1685746 cri.go:96] found id: ""
	I1222 01:42:01.934377 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.934386 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:01.934393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:01.934457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:01.960407 1685746 cri.go:96] found id: ""
	I1222 01:42:01.960433 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.960442 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:01.960449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:01.960512 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:01.988879 1685746 cri.go:96] found id: ""
	I1222 01:42:01.988915 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.988925 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:01.988931 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:01.989000 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:02.021404 1685746 cri.go:96] found id: ""
	I1222 01:42:02.021444 1685746 logs.go:282] 0 containers: []
	W1222 01:42:02.021454 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:02.021464 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:02.021476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:02.053252 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:02.053282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:02.111509 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:02.111548 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:02.127002 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:02.127081 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:02.196408 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:02.196429 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:02.196442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:04.723107 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:04.734699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:04.734786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:04.771439 1685746 cri.go:96] found id: ""
	I1222 01:42:04.771462 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.771471 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:04.771477 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:04.771540 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:04.806612 1685746 cri.go:96] found id: ""
	I1222 01:42:04.806639 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.806648 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:04.806655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:04.806714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:04.832290 1685746 cri.go:96] found id: ""
	I1222 01:42:04.832320 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.832329 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:04.832336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:04.832404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:04.860422 1685746 cri.go:96] found id: ""
	I1222 01:42:04.860460 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.860469 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:04.860494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:04.860603 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:04.885397 1685746 cri.go:96] found id: ""
	I1222 01:42:04.885424 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.885433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:04.885440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:04.885524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:04.910499 1685746 cri.go:96] found id: ""
	I1222 01:42:04.910529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.910539 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:04.910546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:04.910607 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:04.934849 1685746 cri.go:96] found id: ""
	I1222 01:42:04.934887 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.934897 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:04.934921 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:04.935013 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:04.964384 1685746 cri.go:96] found id: ""
	I1222 01:42:04.964411 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.964420 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:04.964429 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:04.964460 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:05.023249 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:05.023347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:05.042677 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:05.042702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:05.113125 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:05.113151 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:05.113167 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:05.139072 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:05.139109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:07.672253 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:07.683433 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:07.683523 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:07.710000 1685746 cri.go:96] found id: ""
	I1222 01:42:07.710025 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.710033 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:07.710040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:07.710129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:07.749657 1685746 cri.go:96] found id: ""
	I1222 01:42:07.749685 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.749695 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:07.749702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:07.749769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:07.779817 1685746 cri.go:96] found id: ""
	I1222 01:42:07.779844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.779853 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:07.779860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:07.779920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:07.809501 1685746 cri.go:96] found id: ""
	I1222 01:42:07.809529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.809538 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:07.809546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:07.809606 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:07.834291 1685746 cri.go:96] found id: ""
	I1222 01:42:07.834318 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.834327 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:07.834334 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:07.834395 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:07.859724 1685746 cri.go:96] found id: ""
	I1222 01:42:07.859791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.859807 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:07.859814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:07.859874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:07.891259 1685746 cri.go:96] found id: ""
	I1222 01:42:07.891287 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.891296 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:07.891303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:07.891362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:07.916371 1685746 cri.go:96] found id: ""
	I1222 01:42:07.916451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.916467 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:07.916477 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:07.916489 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:07.943955 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:07.943981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:08.000957 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:08.001003 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:08.021265 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:08.021299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:08.098699 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:08.098725 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:08.098739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:10.625986 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:10.637185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:10.637275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:10.663011 1685746 cri.go:96] found id: ""
	I1222 01:42:10.663039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.663048 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:10.663055 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:10.663121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:10.689593 1685746 cri.go:96] found id: ""
	I1222 01:42:10.689623 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.689633 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:10.689639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:10.689704 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:10.718520 1685746 cri.go:96] found id: ""
	I1222 01:42:10.718545 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.718554 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:10.718561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:10.718627 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:10.748796 1685746 cri.go:96] found id: ""
	I1222 01:42:10.748829 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.748839 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:10.748846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:10.748919 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:10.780456 1685746 cri.go:96] found id: ""
	I1222 01:42:10.780493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.780508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:10.780515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:10.780591 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:10.810196 1685746 cri.go:96] found id: ""
	I1222 01:42:10.810234 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.810243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:10.810250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:10.810346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:10.836475 1685746 cri.go:96] found id: ""
	I1222 01:42:10.836502 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.836511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:10.836518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:10.836582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:10.862222 1685746 cri.go:96] found id: ""
	I1222 01:42:10.862246 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.862255 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:10.862264 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:10.862275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:10.918613 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:10.918648 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:10.933449 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:10.933478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:11.013628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:11.013706 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:11.013738 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:11.042713 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:11.042803 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:13.581897 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:13.592897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:13.592969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:13.621158 1685746 cri.go:96] found id: ""
	I1222 01:42:13.621184 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.621194 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:13.621200 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:13.621265 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:13.646742 1685746 cri.go:96] found id: ""
	I1222 01:42:13.646769 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.646778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:13.646784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:13.646843 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:13.671981 1685746 cri.go:96] found id: ""
	I1222 01:42:13.672014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.672023 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:13.672030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:13.672093 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:13.697359 1685746 cri.go:96] found id: ""
	I1222 01:42:13.697387 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.697397 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:13.697408 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:13.697471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:13.723455 1685746 cri.go:96] found id: ""
	I1222 01:42:13.723481 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.723491 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:13.723499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:13.723560 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:13.762227 1685746 cri.go:96] found id: ""
	I1222 01:42:13.762251 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.762259 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:13.762266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:13.762325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:13.792416 1685746 cri.go:96] found id: ""
	I1222 01:42:13.792440 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.792448 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:13.792455 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:13.792521 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:13.824151 1685746 cri.go:96] found id: ""
	I1222 01:42:13.824178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.824188 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:13.824227 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:13.824251 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:13.839610 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:13.839639 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:13.903103 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:13.903125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:13.903138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:13.928958 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:13.928992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:13.959685 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:13.959714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.518219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:16.529223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:16.529294 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:16.555927 1685746 cri.go:96] found id: ""
	I1222 01:42:16.555953 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.555962 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:16.555969 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:16.556028 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:16.581196 1685746 cri.go:96] found id: ""
	I1222 01:42:16.581223 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.581233 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:16.581240 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:16.581303 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:16.607543 1685746 cri.go:96] found id: ""
	I1222 01:42:16.607569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.607578 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:16.607585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:16.607651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:16.637077 1685746 cri.go:96] found id: ""
	I1222 01:42:16.637106 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.637116 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:16.637123 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:16.637183 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:16.662155 1685746 cri.go:96] found id: ""
	I1222 01:42:16.662178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.662187 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:16.662193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:16.662257 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:16.694483 1685746 cri.go:96] found id: ""
	I1222 01:42:16.694507 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.694516 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:16.694523 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:16.694582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:16.719153 1685746 cri.go:96] found id: ""
	I1222 01:42:16.719178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.719188 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:16.719195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:16.719258 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:16.750982 1685746 cri.go:96] found id: ""
	I1222 01:42:16.751007 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.751017 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:16.751026 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:16.751038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.809848 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:16.809888 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:16.828821 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:16.828852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:16.896032 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:16.896058 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:16.896071 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:16.921650 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:16.921686 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.450391 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:19.461241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:19.461314 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:19.488679 1685746 cri.go:96] found id: ""
	I1222 01:42:19.488705 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.488715 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:19.488722 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:19.488784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:19.514947 1685746 cri.go:96] found id: ""
	I1222 01:42:19.514972 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.514982 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:19.514989 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:19.515051 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:19.541761 1685746 cri.go:96] found id: ""
	I1222 01:42:19.541786 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.541795 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:19.541802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:19.541867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:19.566418 1685746 cri.go:96] found id: ""
	I1222 01:42:19.566441 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.566450 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:19.566456 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:19.566515 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:19.591707 1685746 cri.go:96] found id: ""
	I1222 01:42:19.591739 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.591748 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:19.591754 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:19.591857 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:19.618308 1685746 cri.go:96] found id: ""
	I1222 01:42:19.618343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.618352 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:19.618362 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:19.618441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:19.644750 1685746 cri.go:96] found id: ""
	I1222 01:42:19.644791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.644801 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:19.644808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:19.644883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:19.674267 1685746 cri.go:96] found id: ""
	I1222 01:42:19.674295 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.674304 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:19.674315 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:19.674327 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:19.689360 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:19.689445 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:19.766188 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:19.766263 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:19.766290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:19.793580 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:19.793657 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.829853 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:19.829884 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:22.388471 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:22.399089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:22.399192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:22.428498 1685746 cri.go:96] found id: ""
	I1222 01:42:22.428569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.428583 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:22.428591 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:22.428672 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:22.458145 1685746 cri.go:96] found id: ""
	I1222 01:42:22.458182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.458196 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:22.458203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:22.458276 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:22.485165 1685746 cri.go:96] found id: ""
	I1222 01:42:22.485202 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.485212 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:22.485218 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:22.485283 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:22.510263 1685746 cri.go:96] found id: ""
	I1222 01:42:22.510292 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.510302 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:22.510308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:22.510374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:22.539347 1685746 cri.go:96] found id: ""
	I1222 01:42:22.539374 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.539383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:22.539391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:22.539453 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:22.564154 1685746 cri.go:96] found id: ""
	I1222 01:42:22.564182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.564193 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:22.564205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:22.564311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:22.593661 1685746 cri.go:96] found id: ""
	I1222 01:42:22.593688 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.593697 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:22.593703 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:22.593767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:22.618629 1685746 cri.go:96] found id: ""
	I1222 01:42:22.618654 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.618663 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:22.618672 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:22.618714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:22.675019 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:22.675057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:22.690208 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:22.690241 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:22.759102 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:22.759127 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:22.759140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:22.790419 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:22.790453 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:25.330239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:25.341121 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:25.341190 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:25.370417 1685746 cri.go:96] found id: ""
	I1222 01:42:25.370493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.370523 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:25.370543 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:25.370636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:25.399975 1685746 cri.go:96] found id: ""
	I1222 01:42:25.400000 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.400009 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:25.400015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:25.400075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:25.424384 1685746 cri.go:96] found id: ""
	I1222 01:42:25.424414 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.424424 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:25.424431 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:25.424491 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:25.453828 1685746 cri.go:96] found id: ""
	I1222 01:42:25.453916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.453956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:25.453984 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:25.454124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:25.480847 1685746 cri.go:96] found id: ""
	I1222 01:42:25.480868 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.480877 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:25.480883 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:25.480942 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:25.508776 1685746 cri.go:96] found id: ""
	I1222 01:42:25.508801 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.508810 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:25.508817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:25.508877 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:25.539362 1685746 cri.go:96] found id: ""
	I1222 01:42:25.539385 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.539396 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:25.539402 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:25.539461 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:25.566615 1685746 cri.go:96] found id: ""
	I1222 01:42:25.566641 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.566650 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:25.566659 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:25.566670 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:25.622750 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:25.622784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:25.638693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:25.638728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:25.702796 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:25.702823 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:25.702835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:25.727901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:25.727938 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:28.269113 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:28.280220 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:28.280317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:28.305926 1685746 cri.go:96] found id: ""
	I1222 01:42:28.305948 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.305957 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:28.305963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:28.306020 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:28.330985 1685746 cri.go:96] found id: ""
	I1222 01:42:28.331010 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.331020 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:28.331026 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:28.331086 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:28.357992 1685746 cri.go:96] found id: ""
	I1222 01:42:28.358018 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.358028 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:28.358035 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:28.358131 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:28.384559 1685746 cri.go:96] found id: ""
	I1222 01:42:28.384585 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.384594 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:28.384603 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:28.384665 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:28.412628 1685746 cri.go:96] found id: ""
	I1222 01:42:28.412650 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.412659 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:28.412665 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:28.412731 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:28.438582 1685746 cri.go:96] found id: ""
	I1222 01:42:28.438605 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.438613 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:28.438620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:28.438685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:28.468458 1685746 cri.go:96] found id: ""
	I1222 01:42:28.468484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.468493 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:28.468500 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:28.468565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:28.493207 1685746 cri.go:96] found id: ""
	I1222 01:42:28.493231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.493239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:28.493249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:28.493260 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:28.547741 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:28.547777 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:28.562578 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:28.562608 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:28.637227 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:28.637250 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:28.637263 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:28.662593 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:28.662632 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.190941 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:31.202783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:31.202858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:31.227601 1685746 cri.go:96] found id: ""
	I1222 01:42:31.227625 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.227633 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:31.227642 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:31.227718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:31.267011 1685746 cri.go:96] found id: ""
	I1222 01:42:31.267040 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.267049 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:31.267056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:31.267118 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:31.363207 1685746 cri.go:96] found id: ""
	I1222 01:42:31.363231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.363239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:31.363246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:31.363320 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:31.412753 1685746 cri.go:96] found id: ""
	I1222 01:42:31.412780 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.412788 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:31.412796 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:31.412858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:31.453115 1685746 cri.go:96] found id: ""
	I1222 01:42:31.453145 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.453154 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:31.453167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:31.453225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:31.492529 1685746 cri.go:96] found id: ""
	I1222 01:42:31.492550 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.492558 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:31.492565 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:31.492621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:31.529156 1685746 cri.go:96] found id: ""
	I1222 01:42:31.529179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.529187 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:31.529193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:31.529252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:31.561255 1685746 cri.go:96] found id: ""
	I1222 01:42:31.561283 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.561292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:31.561301 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:31.561314 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.622500 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:31.622526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:31.690749 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:31.690784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:31.706062 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:31.706182 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:31.827329 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:31.792352    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804228    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804969    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820004    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820674    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:31.792352    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804228    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804969    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820004    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820674    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:31.827354 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:31.827369 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.368888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:34.380077 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:34.380154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:34.406174 1685746 cri.go:96] found id: ""
	I1222 01:42:34.406198 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.406207 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:34.406213 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:34.406280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:34.437127 1685746 cri.go:96] found id: ""
	I1222 01:42:34.437152 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.437161 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:34.437168 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:34.437234 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:34.462419 1685746 cri.go:96] found id: ""
	I1222 01:42:34.462445 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.462454 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:34.462463 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:34.462524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:34.491011 1685746 cri.go:96] found id: ""
	I1222 01:42:34.491039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.491049 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:34.491056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:34.491117 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:34.515544 1685746 cri.go:96] found id: ""
	I1222 01:42:34.515570 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.515580 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:34.515587 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:34.515644 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:34.543686 1685746 cri.go:96] found id: ""
	I1222 01:42:34.543714 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.543722 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:34.543730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:34.543788 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:34.572402 1685746 cri.go:96] found id: ""
	I1222 01:42:34.572427 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.572436 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:34.572442 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:34.572561 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:34.597762 1685746 cri.go:96] found id: ""
	I1222 01:42:34.597789 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.597799 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:34.597808 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:34.597820 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.622955 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:34.622991 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:34.651563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:34.651592 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:34.708102 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:34.708139 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:34.723329 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:34.723358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:34.788870 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:34.780572    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.781230    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.782852    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.783472    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.785097    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:34.780572    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.781230    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.782852    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.783472    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.785097    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:37.289033 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:37.307914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:37.308010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:37.342876 1685746 cri.go:96] found id: ""
	I1222 01:42:37.342916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.342925 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:37.342932 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:37.342994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:37.369883 1685746 cri.go:96] found id: ""
	I1222 01:42:37.369912 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.369921 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:37.369928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:37.369990 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:37.399765 1685746 cri.go:96] found id: ""
	I1222 01:42:37.399792 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.399800 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:37.399807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:37.399887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:37.425866 1685746 cri.go:96] found id: ""
	I1222 01:42:37.425894 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.425904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:37.425911 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:37.425976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:37.452177 1685746 cri.go:96] found id: ""
	I1222 01:42:37.452252 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.452273 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:37.452280 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:37.452349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:37.478374 1685746 cri.go:96] found id: ""
	I1222 01:42:37.478405 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.478415 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:37.478421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:37.478482 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:37.504627 1685746 cri.go:96] found id: ""
	I1222 01:42:37.504663 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.504672 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:37.504679 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:37.504785 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:37.531304 1685746 cri.go:96] found id: ""
	I1222 01:42:37.531343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.531353 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:37.531380 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:37.531399 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:37.559371 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:37.559401 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:37.614026 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:37.614064 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:37.630657 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:37.630689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:37.698972 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:37.690733    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.691573    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.693410    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.694215    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.695368    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:37.690733    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.691573    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.693410    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.694215    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.695368    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:37.698998 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:37.699010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.226630 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:40.251806 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:40.251880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:40.312461 1685746 cri.go:96] found id: ""
	I1222 01:42:40.312484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.312493 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:40.312499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:40.312559 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:40.346654 1685746 cri.go:96] found id: ""
	I1222 01:42:40.346682 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.346691 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:40.346697 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:40.346757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:40.376245 1685746 cri.go:96] found id: ""
	I1222 01:42:40.376279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.376288 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:40.376294 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:40.376355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:40.400546 1685746 cri.go:96] found id: ""
	I1222 01:42:40.400572 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.400581 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:40.400588 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:40.400647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:40.425326 1685746 cri.go:96] found id: ""
	I1222 01:42:40.425353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.425362 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:40.425369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:40.425431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:40.449304 1685746 cri.go:96] found id: ""
	I1222 01:42:40.449328 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.449337 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:40.449345 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:40.449405 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:40.474828 1685746 cri.go:96] found id: ""
	I1222 01:42:40.474854 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.474863 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:40.474870 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:40.474931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:40.503909 1685746 cri.go:96] found id: ""
	I1222 01:42:40.503933 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.503941 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:40.503950 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:40.503960 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:40.559784 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:40.559821 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:40.575010 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:40.575041 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:40.643863 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:40.643888 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:40.643900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.674641 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:40.674683 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:43.208931 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:43.219892 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:43.219965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:43.278356 1685746 cri.go:96] found id: ""
	I1222 01:42:43.278383 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.278393 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:43.278399 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:43.278468 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:43.318802 1685746 cri.go:96] found id: ""
	I1222 01:42:43.318828 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.318838 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:43.318844 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:43.318903 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:43.351222 1685746 cri.go:96] found id: ""
	I1222 01:42:43.351247 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.351256 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:43.351263 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:43.351323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:43.377242 1685746 cri.go:96] found id: ""
	I1222 01:42:43.377267 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.377275 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:43.377282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:43.377346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:43.403326 1685746 cri.go:96] found id: ""
	I1222 01:42:43.403353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.403363 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:43.403370 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:43.403459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:43.429205 1685746 cri.go:96] found id: ""
	I1222 01:42:43.429232 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.429241 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:43.429248 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:43.429351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:43.455157 1685746 cri.go:96] found id: ""
	I1222 01:42:43.455188 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.455198 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:43.455204 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:43.455274 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:43.484817 1685746 cri.go:96] found id: ""
	I1222 01:42:43.484846 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.484856 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:43.484866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:43.484877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:43.544248 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:43.544285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:43.559152 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:43.559184 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:43.623520 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:43.623546 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:43.623559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:43.648911 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:43.648951 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:46.182386 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:46.193692 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:46.193766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:46.219554 1685746 cri.go:96] found id: ""
	I1222 01:42:46.219592 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.219602 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:46.219608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:46.219667 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:46.269097 1685746 cri.go:96] found id: ""
	I1222 01:42:46.269128 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.269137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:46.269152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:46.269215 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:46.315573 1685746 cri.go:96] found id: ""
	I1222 01:42:46.315609 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.315619 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:46.315627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:46.315698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:46.354254 1685746 cri.go:96] found id: ""
	I1222 01:42:46.354291 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.354300 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:46.354311 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:46.354385 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:46.382733 1685746 cri.go:96] found id: ""
	I1222 01:42:46.382810 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.382823 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:46.382831 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:46.382893 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:46.409988 1685746 cri.go:96] found id: ""
	I1222 01:42:46.410014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.410024 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:46.410032 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:46.410123 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:46.440621 1685746 cri.go:96] found id: ""
	I1222 01:42:46.440645 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.440654 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:46.440661 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:46.440726 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:46.466426 1685746 cri.go:96] found id: ""
	I1222 01:42:46.466451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.466461 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:46.466478 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:46.466491 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:46.522404 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:46.522449 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:46.538001 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:46.538129 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:46.608273 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:46.608296 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:46.608311 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:46.634354 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:46.634388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.167965 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:49.178919 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:49.178992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:49.204884 1685746 cri.go:96] found id: ""
	I1222 01:42:49.204909 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.204917 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:49.204924 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:49.204992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:49.231503 1685746 cri.go:96] found id: ""
	I1222 01:42:49.231530 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.231539 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:49.231547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:49.231611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:49.274476 1685746 cri.go:96] found id: ""
	I1222 01:42:49.274500 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.274508 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:49.274515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:49.274577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:49.318032 1685746 cri.go:96] found id: ""
	I1222 01:42:49.318054 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.318063 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:49.318069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:49.318163 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:49.361375 1685746 cri.go:96] found id: ""
	I1222 01:42:49.361398 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.361407 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:49.361414 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:49.361475 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:49.389203 1685746 cri.go:96] found id: ""
	I1222 01:42:49.389230 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.389240 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:49.389247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:49.389315 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:49.419554 1685746 cri.go:96] found id: ""
	I1222 01:42:49.419579 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.419588 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:49.419595 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:49.419656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:49.448457 1685746 cri.go:96] found id: ""
	I1222 01:42:49.448482 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.448491 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:49.448501 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:49.448513 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.477586 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:49.477616 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:49.534782 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:49.534822 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:49.550136 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:49.550166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:49.618143 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:49.618169 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:49.618190 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.144370 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:52.155874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:52.155999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:52.183608 1685746 cri.go:96] found id: ""
	I1222 01:42:52.183633 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.183641 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:52.183648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:52.183710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:52.213975 1685746 cri.go:96] found id: ""
	I1222 01:42:52.214002 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.214011 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:52.214018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:52.214108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:52.260878 1685746 cri.go:96] found id: ""
	I1222 01:42:52.260904 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.260913 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:52.260920 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:52.260986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:52.326163 1685746 cri.go:96] found id: ""
	I1222 01:42:52.326191 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.326200 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:52.326206 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:52.326268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:52.351586 1685746 cri.go:96] found id: ""
	I1222 01:42:52.351610 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.351619 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:52.351625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:52.351685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:52.378191 1685746 cri.go:96] found id: ""
	I1222 01:42:52.378271 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.378297 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:52.378320 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:52.378423 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:52.403988 1685746 cri.go:96] found id: ""
	I1222 01:42:52.404014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.404024 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:52.404030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:52.404115 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:52.434842 1685746 cri.go:96] found id: ""
	I1222 01:42:52.434870 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.434879 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:52.434888 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:52.434901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:52.493615 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:52.493659 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:52.509970 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:52.510008 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:52.573713 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:52.573748 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:52.573760 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.598497 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:52.598532 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.130037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:55.141017 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:55.141094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:55.166253 1685746 cri.go:96] found id: ""
	I1222 01:42:55.166279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.166289 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:55.166298 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:55.166358 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:55.190818 1685746 cri.go:96] found id: ""
	I1222 01:42:55.190844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.190856 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:55.190863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:55.190969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:55.216347 1685746 cri.go:96] found id: ""
	I1222 01:42:55.216380 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.216390 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:55.216397 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:55.216501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:55.259015 1685746 cri.go:96] found id: ""
	I1222 01:42:55.259091 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.259115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:55.259135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:55.259247 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:55.326026 1685746 cri.go:96] found id: ""
	I1222 01:42:55.326049 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.326058 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:55.326065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:55.326147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:55.350799 1685746 cri.go:96] found id: ""
	I1222 01:42:55.350823 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.350832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:55.350839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:55.350899 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:55.376097 1685746 cri.go:96] found id: ""
	I1222 01:42:55.376123 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.376133 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:55.376139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:55.376200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:55.401620 1685746 cri.go:96] found id: ""
	I1222 01:42:55.401693 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.401715 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:55.401740 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:55.401783 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.434315 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:55.434343 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:55.489616 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:55.489652 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:55.504798 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:55.504829 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:55.569246 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:55.569273 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:55.569285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.094905 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:58.105827 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:58.105902 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:58.131496 1685746 cri.go:96] found id: ""
	I1222 01:42:58.131522 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.131531 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:58.131538 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:58.131602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:58.156152 1685746 cri.go:96] found id: ""
	I1222 01:42:58.156179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.156188 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:58.156195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:58.156253 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:58.182075 1685746 cri.go:96] found id: ""
	I1222 01:42:58.182124 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.182140 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:58.182147 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:58.182211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:58.212714 1685746 cri.go:96] found id: ""
	I1222 01:42:58.212737 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.212746 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:58.212752 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:58.212811 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:58.256896 1685746 cri.go:96] found id: ""
	I1222 01:42:58.256919 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.256931 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:58.256938 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:58.257002 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:58.314212 1685746 cri.go:96] found id: ""
	I1222 01:42:58.314235 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.314243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:58.314250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:58.314311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:58.348822 1685746 cri.go:96] found id: ""
	I1222 01:42:58.348844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.348853 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:58.348860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:58.349006 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:58.375112 1685746 cri.go:96] found id: ""
	I1222 01:42:58.375139 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.375148 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:58.375157 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:58.375199 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:58.440769 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:58.440793 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:58.440807 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.466180 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:58.466214 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:58.498249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:58.498277 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:58.553912 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:58.553948 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.069587 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:01.080494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:01.080569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:01.106366 1685746 cri.go:96] found id: ""
	I1222 01:43:01.106393 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.106403 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:01.106409 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:01.106472 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:01.134991 1685746 cri.go:96] found id: ""
	I1222 01:43:01.135019 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.135028 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:01.135040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:01.135108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:01.161160 1685746 cri.go:96] found id: ""
	I1222 01:43:01.161188 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.161198 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:01.161205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:01.161268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:01.189244 1685746 cri.go:96] found id: ""
	I1222 01:43:01.189271 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.189281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:01.189288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:01.189353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:01.216039 1685746 cri.go:96] found id: ""
	I1222 01:43:01.216109 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.216123 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:01.216131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:01.216206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:01.255772 1685746 cri.go:96] found id: ""
	I1222 01:43:01.255803 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.255812 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:01.255818 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:01.255880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:01.331745 1685746 cri.go:96] found id: ""
	I1222 01:43:01.331771 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.331780 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:01.331787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:01.331856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:01.360958 1685746 cri.go:96] found id: ""
	I1222 01:43:01.360985 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.360995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:01.361003 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:01.361014 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:01.416443 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:01.416479 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.433706 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:01.433735 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:01.504365 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:01.504393 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:01.504405 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:01.530386 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:01.530421 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.060702 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:04.074701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:04.074781 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:04.104007 1685746 cri.go:96] found id: ""
	I1222 01:43:04.104034 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.104043 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:04.104050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:04.104110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:04.129051 1685746 cri.go:96] found id: ""
	I1222 01:43:04.129081 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.129091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:04.129098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:04.129160 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:04.155234 1685746 cri.go:96] found id: ""
	I1222 01:43:04.155260 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.155275 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:04.155282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:04.155344 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:04.180095 1685746 cri.go:96] found id: ""
	I1222 01:43:04.180120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.180130 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:04.180137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:04.180199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:04.204953 1685746 cri.go:96] found id: ""
	I1222 01:43:04.204976 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.204984 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:04.204991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:04.205052 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:04.231351 1685746 cri.go:96] found id: ""
	I1222 01:43:04.231376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.231385 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:04.231392 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:04.231452 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:04.269450 1685746 cri.go:96] found id: ""
	I1222 01:43:04.269476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.269485 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:04.269492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:04.269556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:04.310137 1685746 cri.go:96] found id: ""
	I1222 01:43:04.310210 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.310247 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:04.310276 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:04.310304 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:04.330066 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:04.330204 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:04.398531 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:04.398600 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:04.398622 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:04.423684 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:04.423715 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.455847 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:04.455915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.011267 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:07.022247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:07.022373 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:07.047710 1685746 cri.go:96] found id: ""
	I1222 01:43:07.047737 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.047746 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:07.047755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:07.047817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:07.071622 1685746 cri.go:96] found id: ""
	I1222 01:43:07.071644 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.071653 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:07.071662 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:07.071724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:07.100514 1685746 cri.go:96] found id: ""
	I1222 01:43:07.100539 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.100548 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:07.100555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:07.100622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:07.126740 1685746 cri.go:96] found id: ""
	I1222 01:43:07.126810 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.126833 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:07.126845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:07.126921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:07.156147 1685746 cri.go:96] found id: ""
	I1222 01:43:07.156174 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.156184 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:07.156190 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:07.156268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:07.185551 1685746 cri.go:96] found id: ""
	I1222 01:43:07.185574 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.185583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:07.185589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:07.185670 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:07.210495 1685746 cri.go:96] found id: ""
	I1222 01:43:07.210563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.210585 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:07.210608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:07.210679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:07.234671 1685746 cri.go:96] found id: ""
	I1222 01:43:07.234751 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.234775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:07.234799 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:07.234847 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.318902 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:07.318936 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:07.334947 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:07.334977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:07.400498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:07.400520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:07.400534 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:07.425576 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:07.425613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:09.957230 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:09.968065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:09.968142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:09.993760 1685746 cri.go:96] found id: ""
	I1222 01:43:09.993785 1685746 logs.go:282] 0 containers: []
	W1222 01:43:09.993794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:09.993802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:09.993870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:10.024110 1685746 cri.go:96] found id: ""
	I1222 01:43:10.024140 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.024151 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:10.024157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:10.024232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:10.053092 1685746 cri.go:96] found id: ""
	I1222 01:43:10.053122 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.053132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:10.053138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:10.053203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:10.078967 1685746 cri.go:96] found id: ""
	I1222 01:43:10.078994 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.079004 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:10.079011 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:10.079079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:10.105969 1685746 cri.go:96] found id: ""
	I1222 01:43:10.105993 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.106001 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:10.106008 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:10.106164 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:10.132413 1685746 cri.go:96] found id: ""
	I1222 01:43:10.132448 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.132457 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:10.132464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:10.132526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:10.158912 1685746 cri.go:96] found id: ""
	I1222 01:43:10.158941 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.158950 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:10.158957 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:10.159038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:10.185594 1685746 cri.go:96] found id: ""
	I1222 01:43:10.185621 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.185630 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:10.185639 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:10.185681 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:10.214349 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:10.214378 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:10.274002 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:10.274096 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:10.289686 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:10.289761 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:10.375337 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:10.375413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:10.375441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:12.901196 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:12.911625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:12.911710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:12.936713 1685746 cri.go:96] found id: ""
	I1222 01:43:12.936738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.936747 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:12.936753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:12.936827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:12.961849 1685746 cri.go:96] found id: ""
	I1222 01:43:12.961870 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.961879 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:12.961888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:12.961950 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:12.990893 1685746 cri.go:96] found id: ""
	I1222 01:43:12.990919 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.990929 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:12.990935 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:12.990996 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:13.033584 1685746 cri.go:96] found id: ""
	I1222 01:43:13.033611 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.033621 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:13.033628 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:13.033691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:13.062192 1685746 cri.go:96] found id: ""
	I1222 01:43:13.062216 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.062225 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:13.062232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:13.062297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:13.088173 1685746 cri.go:96] found id: ""
	I1222 01:43:13.088213 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.088223 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:13.088230 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:13.088312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:13.115014 1685746 cri.go:96] found id: ""
	I1222 01:43:13.115051 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.115062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:13.115069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:13.115147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:13.140656 1685746 cri.go:96] found id: ""
	I1222 01:43:13.140691 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.140700 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:13.140710 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:13.140722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:13.177585 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:13.177660 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:13.233128 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:13.233162 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:13.251827 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:13.251907 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:13.360494 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:13.360570 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:13.360589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:15.887876 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:15.898631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:15.898708 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:15.923707 1685746 cri.go:96] found id: ""
	I1222 01:43:15.923732 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.923743 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:15.923750 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:15.923829 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:15.950453 1685746 cri.go:96] found id: ""
	I1222 01:43:15.950478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.950492 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:15.950498 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:15.950612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:15.975355 1685746 cri.go:96] found id: ""
	I1222 01:43:15.975436 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.975460 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:15.975475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:15.975549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:16.000992 1685746 cri.go:96] found id: ""
	I1222 01:43:16.001026 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.001036 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:16.001043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:16.001134 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:16.033538 1685746 cri.go:96] found id: ""
	I1222 01:43:16.033563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.033572 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:16.033578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:16.033641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:16.059451 1685746 cri.go:96] found id: ""
	I1222 01:43:16.059476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.059486 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:16.059492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:16.059556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:16.085491 1685746 cri.go:96] found id: ""
	I1222 01:43:16.085515 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.085524 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:16.085530 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:16.085598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:16.111197 1685746 cri.go:96] found id: ""
	I1222 01:43:16.111220 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.111228 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:16.111237 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:16.111249 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:16.167058 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:16.167095 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:16.182867 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:16.182947 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:16.303679 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:16.303753 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:16.303780 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:16.336416 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:16.336497 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:18.869703 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:18.880527 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:18.880602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:18.906051 1685746 cri.go:96] found id: ""
	I1222 01:43:18.906102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.906112 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:18.906119 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:18.906181 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:18.931999 1685746 cri.go:96] found id: ""
	I1222 01:43:18.932027 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.932036 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:18.932043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:18.932110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:18.959202 1685746 cri.go:96] found id: ""
	I1222 01:43:18.959230 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.959239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:18.959246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:18.959307 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:18.988050 1685746 cri.go:96] found id: ""
	I1222 01:43:18.988075 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.988084 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:18.988091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:18.988179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:19.014062 1685746 cri.go:96] found id: ""
	I1222 01:43:19.014116 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.014125 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:19.014132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:19.014197 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:19.041419 1685746 cri.go:96] found id: ""
	I1222 01:43:19.041454 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.041464 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:19.041471 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:19.041548 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:19.067079 1685746 cri.go:96] found id: ""
	I1222 01:43:19.067114 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.067123 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:19.067130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:19.067199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:19.093005 1685746 cri.go:96] found id: ""
	I1222 01:43:19.093041 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.093050 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:19.093059 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:19.093070 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:19.148083 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:19.148119 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:19.163510 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:19.163547 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:19.228482 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:19.228505 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:19.228519 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:19.264345 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:19.264402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:21.823213 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:21.834353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:21.834427 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:21.860777 1685746 cri.go:96] found id: ""
	I1222 01:43:21.860805 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.860815 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:21.860823 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:21.860889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:21.889075 1685746 cri.go:96] found id: ""
	I1222 01:43:21.889150 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.889173 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:21.889195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:21.889284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:21.915306 1685746 cri.go:96] found id: ""
	I1222 01:43:21.915334 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.915343 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:21.915349 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:21.915413 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:21.940239 1685746 cri.go:96] found id: ""
	I1222 01:43:21.940610 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.940624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:21.940633 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:21.940694 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:21.966280 1685746 cri.go:96] found id: ""
	I1222 01:43:21.966307 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.966316 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:21.966323 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:21.966392 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:21.991888 1685746 cri.go:96] found id: ""
	I1222 01:43:21.991916 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.991925 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:21.991934 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:21.991993 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:22.021851 1685746 cri.go:96] found id: ""
	I1222 01:43:22.021878 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.021888 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:22.021895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:22.021962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:22.052435 1685746 cri.go:96] found id: ""
	I1222 01:43:22.052464 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.052473 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:22.052483 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:22.052495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:22.128628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:22.128653 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:22.128668 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:22.154140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:22.154180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:22.190762 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:22.190790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:22.254223 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:22.254264 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:24.790679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:24.801308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:24.801380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:24.826466 1685746 cri.go:96] found id: ""
	I1222 01:43:24.826492 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.826501 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:24.826508 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:24.826573 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:24.852169 1685746 cri.go:96] found id: ""
	I1222 01:43:24.852196 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.852206 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:24.852212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:24.852277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:24.876880 1685746 cri.go:96] found id: ""
	I1222 01:43:24.876906 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.876915 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:24.876922 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:24.876986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:24.902741 1685746 cri.go:96] found id: ""
	I1222 01:43:24.902769 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.902778 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:24.902785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:24.902851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:24.928580 1685746 cri.go:96] found id: ""
	I1222 01:43:24.928603 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.928612 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:24.928618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:24.928686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:24.958505 1685746 cri.go:96] found id: ""
	I1222 01:43:24.958533 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.958542 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:24.958548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:24.958610 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:24.988354 1685746 cri.go:96] found id: ""
	I1222 01:43:24.988394 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.988403 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:24.988410 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:24.988471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:25.022402 1685746 cri.go:96] found id: ""
	I1222 01:43:25.022445 1685746 logs.go:282] 0 containers: []
	W1222 01:43:25.022455 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:25.022465 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:25.022477 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:25.090031 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:25.090122 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:25.090152 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:25.117050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:25.117090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:25.146413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:25.146443 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:25.203377 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:25.203415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.718901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:27.729888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:27.729962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:27.753619 1685746 cri.go:96] found id: ""
	I1222 01:43:27.753643 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.753651 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:27.753657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:27.753734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:27.778439 1685746 cri.go:96] found id: ""
	I1222 01:43:27.778468 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.778477 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:27.778484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:27.778549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:27.803747 1685746 cri.go:96] found id: ""
	I1222 01:43:27.803776 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.803786 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:27.803792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:27.803851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:27.833272 1685746 cri.go:96] found id: ""
	I1222 01:43:27.833295 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.833303 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:27.833310 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:27.833383 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:27.858574 1685746 cri.go:96] found id: ""
	I1222 01:43:27.858602 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.858613 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:27.858619 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:27.858680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:27.884333 1685746 cri.go:96] found id: ""
	I1222 01:43:27.884361 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.884418 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:27.884434 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:27.884509 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:27.914000 1685746 cri.go:96] found id: ""
	I1222 01:43:27.914111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.914145 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:27.914159 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:27.914221 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:27.939204 1685746 cri.go:96] found id: ""
	I1222 01:43:27.939228 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.939237 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:27.939246 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:27.939257 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.953702 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:27.953728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:28.021111 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:28.021131 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:28.021144 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:28.048052 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:28.048090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:28.080739 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:28.080776 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.641402 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:30.652837 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:30.652908 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:30.679700 1685746 cri.go:96] found id: ""
	I1222 01:43:30.679727 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.679736 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:30.679743 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:30.679872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:30.708517 1685746 cri.go:96] found id: ""
	I1222 01:43:30.708545 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.708554 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:30.708561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:30.708622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:30.737801 1685746 cri.go:96] found id: ""
	I1222 01:43:30.737829 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.737838 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:30.737845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:30.737916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:30.764096 1685746 cri.go:96] found id: ""
	I1222 01:43:30.764124 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.764134 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:30.764141 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:30.764252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:30.789565 1685746 cri.go:96] found id: ""
	I1222 01:43:30.789591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.789599 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:30.789607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:30.789684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:30.822764 1685746 cri.go:96] found id: ""
	I1222 01:43:30.822833 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.822857 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:30.822871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:30.822957 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:30.848727 1685746 cri.go:96] found id: ""
	I1222 01:43:30.848754 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.848763 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:30.848770 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:30.848830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:30.876920 1685746 cri.go:96] found id: ""
	I1222 01:43:30.876945 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.876954 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:30.876963 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:30.876974 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.932977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:30.933015 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:30.950177 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:30.950205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:31.021720 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:31.021745 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:31.021757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:31.047873 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:31.047908 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.582285 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:33.593589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:33.593677 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:33.619720 1685746 cri.go:96] found id: ""
	I1222 01:43:33.619746 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.619755 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:33.619762 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:33.619823 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:33.644535 1685746 cri.go:96] found id: ""
	I1222 01:43:33.644558 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.644567 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:33.644573 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:33.644636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:33.674069 1685746 cri.go:96] found id: ""
	I1222 01:43:33.674133 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.674144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:33.674151 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:33.674216 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:33.700076 1685746 cri.go:96] found id: ""
	I1222 01:43:33.700102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.700111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:33.700118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:33.700179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:33.725155 1685746 cri.go:96] found id: ""
	I1222 01:43:33.725182 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.725192 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:33.725199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:33.725259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:33.752045 1685746 cri.go:96] found id: ""
	I1222 01:43:33.752120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.752144 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:33.752166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:33.752270 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:33.776869 1685746 cri.go:96] found id: ""
	I1222 01:43:33.776897 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.776917 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:33.776925 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:33.776995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:33.804537 1685746 cri.go:96] found id: ""
	I1222 01:43:33.804559 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.804568 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:33.804577 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:33.804589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:33.868017 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:33.868038 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:33.868050 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:33.893225 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:33.893268 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.925850 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:33.925880 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:33.984794 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:33.984827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.500237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:36.517959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:36.518035 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:36.566551 1685746 cri.go:96] found id: ""
	I1222 01:43:36.566578 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.566587 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:36.566594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:36.566675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:36.601952 1685746 cri.go:96] found id: ""
	I1222 01:43:36.601979 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.601988 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:36.601994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:36.602069 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:36.628093 1685746 cri.go:96] found id: ""
	I1222 01:43:36.628123 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.628132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:36.628138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:36.628199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:36.653428 1685746 cri.go:96] found id: ""
	I1222 01:43:36.653457 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.653471 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:36.653478 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:36.653536 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:36.680092 1685746 cri.go:96] found id: ""
	I1222 01:43:36.680115 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.680124 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:36.680130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:36.680189 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:36.706982 1685746 cri.go:96] found id: ""
	I1222 01:43:36.707020 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.707030 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:36.707037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:36.707112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:36.731661 1685746 cri.go:96] found id: ""
	I1222 01:43:36.731738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.731760 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:36.731783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:36.731878 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:36.759936 1685746 cri.go:96] found id: ""
	I1222 01:43:36.759958 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.759966 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:36.759975 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:36.759986 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.774574 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:36.774601 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:36.840390 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:36.840453 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:36.840474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:36.865823 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:36.865861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:36.895884 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:36.895914 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.451426 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:39.462101 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:39.462175 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:39.492238 1685746 cri.go:96] found id: ""
	I1222 01:43:39.492261 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.492270 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:39.492281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:39.492355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:39.573214 1685746 cri.go:96] found id: ""
	I1222 01:43:39.573236 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.573244 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:39.573251 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:39.573323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:39.599147 1685746 cri.go:96] found id: ""
	I1222 01:43:39.599172 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.599181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:39.599188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:39.599251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:39.624765 1685746 cri.go:96] found id: ""
	I1222 01:43:39.624850 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.624874 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:39.624915 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:39.625014 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:39.656217 1685746 cri.go:96] found id: ""
	I1222 01:43:39.656244 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.656253 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:39.656260 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:39.656349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:39.682103 1685746 cri.go:96] found id: ""
	I1222 01:43:39.682127 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.682136 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:39.682143 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:39.682211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:39.707971 1685746 cri.go:96] found id: ""
	I1222 01:43:39.707999 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.708008 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:39.708015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:39.708075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:39.737148 1685746 cri.go:96] found id: ""
	I1222 01:43:39.737175 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.737184 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:39.737194 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:39.737210 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:39.805404 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:39.805427 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:39.805441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:39.835140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:39.835180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:39.864203 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:39.864232 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.919399 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:39.919435 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.434907 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:42.447524 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:42.447601 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:42.474430 1685746 cri.go:96] found id: ""
	I1222 01:43:42.474452 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.474468 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:42.474475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:42.474534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:42.539132 1685746 cri.go:96] found id: ""
	I1222 01:43:42.539154 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.539178 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:42.539186 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:42.539287 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:42.575001 1685746 cri.go:96] found id: ""
	I1222 01:43:42.575023 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.575031 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:42.575037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:42.575095 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:42.599923 1685746 cri.go:96] found id: ""
	I1222 01:43:42.599947 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.599956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:42.599963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:42.600027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:42.624602 1685746 cri.go:96] found id: ""
	I1222 01:43:42.624630 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.624640 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:42.624646 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:42.624707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:42.649899 1685746 cri.go:96] found id: ""
	I1222 01:43:42.649925 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.649934 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:42.649941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:42.650001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:42.675756 1685746 cri.go:96] found id: ""
	I1222 01:43:42.675836 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.675860 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:42.675897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:42.675973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:42.702958 1685746 cri.go:96] found id: ""
	I1222 01:43:42.702995 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.703005 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:42.703014 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:42.703025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:42.759487 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:42.759526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.774803 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:42.774835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:42.841752 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:42.841776 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:42.841790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:42.868632 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:42.868666 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:45.400104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:45.410950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:45.411071 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:45.436920 1685746 cri.go:96] found id: ""
	I1222 01:43:45.436957 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.436966 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:45.436973 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:45.437044 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:45.464719 1685746 cri.go:96] found id: ""
	I1222 01:43:45.464755 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.464765 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:45.464771 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:45.464841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:45.501180 1685746 cri.go:96] found id: ""
	I1222 01:43:45.501207 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.501226 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:45.501234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:45.501305 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:45.547294 1685746 cri.go:96] found id: ""
	I1222 01:43:45.547339 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.547350 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:45.547357 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:45.547435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:45.581484 1685746 cri.go:96] found id: ""
	I1222 01:43:45.581526 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.581535 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:45.581542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:45.581613 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:45.610563 1685746 cri.go:96] found id: ""
	I1222 01:43:45.610591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.610600 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:45.610607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:45.610679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:45.637028 1685746 cri.go:96] found id: ""
	I1222 01:43:45.637054 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.637064 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:45.637070 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:45.637141 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:45.662660 1685746 cri.go:96] found id: ""
	I1222 01:43:45.662740 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.662756 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:45.662767 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:45.662779 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:45.719167 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:45.719208 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:45.734405 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:45.734438 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:45.802645 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:45.802667 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:45.802680 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:45.829402 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:45.829439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:48.362229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:48.372648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:48.372722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:48.399816 1685746 cri.go:96] found id: ""
	I1222 01:43:48.399843 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.399852 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:48.399859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:48.399922 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:48.424774 1685746 cri.go:96] found id: ""
	I1222 01:43:48.424800 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.424809 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:48.424816 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:48.424873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:48.449402 1685746 cri.go:96] found id: ""
	I1222 01:43:48.449429 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.449438 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:48.449444 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:48.449501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:48.481785 1685746 cri.go:96] found id: ""
	I1222 01:43:48.481811 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.481822 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:48.481828 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:48.481884 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:48.535392 1685746 cri.go:96] found id: ""
	I1222 01:43:48.535421 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.535429 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:48.535435 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:48.535495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:48.581091 1685746 cri.go:96] found id: ""
	I1222 01:43:48.581119 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.581128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:48.581135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:48.581195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:48.608115 1685746 cri.go:96] found id: ""
	I1222 01:43:48.608143 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.608152 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:48.608158 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:48.608222 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:48.634982 1685746 cri.go:96] found id: ""
	I1222 01:43:48.635007 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.635015 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:48.635024 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:48.635040 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:48.690980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:48.691017 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:48.706101 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:48.706126 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:48.773880 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:48.773903 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:48.773915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:48.798770 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:48.798805 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:51.326747 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:51.337244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:51.337316 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:51.361650 1685746 cri.go:96] found id: ""
	I1222 01:43:51.361674 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.361685 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:51.361691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:51.361752 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:51.387243 1685746 cri.go:96] found id: ""
	I1222 01:43:51.387267 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.387275 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:51.387282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:51.387339 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:51.412051 1685746 cri.go:96] found id: ""
	I1222 01:43:51.412076 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.412085 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:51.412091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:51.412152 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:51.442828 1685746 cri.go:96] found id: ""
	I1222 01:43:51.442855 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.442864 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:51.442871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:51.442931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:51.469084 1685746 cri.go:96] found id: ""
	I1222 01:43:51.469111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.469120 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:51.469128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:51.469196 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:51.505900 1685746 cri.go:96] found id: ""
	I1222 01:43:51.505931 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.505940 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:51.505947 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:51.506015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:51.544756 1685746 cri.go:96] found id: ""
	I1222 01:43:51.544794 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.544803 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:51.544810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:51.544881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:51.595192 1685746 cri.go:96] found id: ""
	I1222 01:43:51.595274 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.595308 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:51.595330 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:51.595370 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:51.651780 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:51.651815 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:51.666583 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:51.666611 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:51.736962 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:51.736984 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:51.736997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:51.763237 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:51.763272 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.292529 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:54.303313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:54.303393 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:54.329228 1685746 cri.go:96] found id: ""
	I1222 01:43:54.329251 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.329260 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:54.329266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:54.329325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:54.353443 1685746 cri.go:96] found id: ""
	I1222 01:43:54.353478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.353488 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:54.353495 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:54.353565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:54.385463 1685746 cri.go:96] found id: ""
	I1222 01:43:54.385487 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.385496 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:54.385502 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:54.385571 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:54.413065 1685746 cri.go:96] found id: ""
	I1222 01:43:54.413135 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.413160 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:54.413209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:54.413290 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:54.440350 1685746 cri.go:96] found id: ""
	I1222 01:43:54.440376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.440385 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:54.440391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:54.440469 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:54.469549 1685746 cri.go:96] found id: ""
	I1222 01:43:54.469583 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.469592 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:54.469599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:54.469668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:54.514637 1685746 cri.go:96] found id: ""
	I1222 01:43:54.514714 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.514738 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:54.514761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:54.514876 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:54.546685 1685746 cri.go:96] found id: ""
	I1222 01:43:54.546708 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.546717 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:54.546726 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:54.546737 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:54.576240 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:54.576324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.618824 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:54.618853 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:54.673867 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:54.673900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:54.689028 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:54.689057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:54.755999 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:57.257146 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:57.268025 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:57.268100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:57.292709 1685746 cri.go:96] found id: ""
	I1222 01:43:57.292738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.292748 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:57.292761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:57.292826 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:57.321159 1685746 cri.go:96] found id: ""
	I1222 01:43:57.321186 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.321195 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:57.321201 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:57.321264 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:57.350573 1685746 cri.go:96] found id: ""
	I1222 01:43:57.350601 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.350611 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:57.350620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:57.350682 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:57.380391 1685746 cri.go:96] found id: ""
	I1222 01:43:57.380425 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.380435 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:57.380441 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:57.380502 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:57.404977 1685746 cri.go:96] found id: ""
	I1222 01:43:57.405003 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.405012 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:57.405018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:57.405080 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:57.431206 1685746 cri.go:96] found id: ""
	I1222 01:43:57.431234 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.431243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:57.431250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:57.431310 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:57.458352 1685746 cri.go:96] found id: ""
	I1222 01:43:57.458378 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.458387 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:57.458393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:57.458454 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:57.487672 1685746 cri.go:96] found id: ""
	I1222 01:43:57.487700 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.487709 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:57.487718 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:57.487729 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:57.523843 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:57.523925 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:57.589400 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:57.589476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:57.650987 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:57.651025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:57.666115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:57.666151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:57.735484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.237195 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:00.303116 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:00.303238 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:00.349571 1685746 cri.go:96] found id: ""
	I1222 01:44:00.349604 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.349614 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:00.349623 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:00.349691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:00.397703 1685746 cri.go:96] found id: ""
	I1222 01:44:00.397728 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.397757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:00.397772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:00.397869 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:00.445846 1685746 cri.go:96] found id: ""
	I1222 01:44:00.445883 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.445891 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:00.445899 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:00.445975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:00.481389 1685746 cri.go:96] found id: ""
	I1222 01:44:00.481433 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.481443 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:00.481451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:00.481545 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:00.555281 1685746 cri.go:96] found id: ""
	I1222 01:44:00.555323 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.555333 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:00.555339 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:00.555417 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:00.610523 1685746 cri.go:96] found id: ""
	I1222 01:44:00.610554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.610565 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:00.610572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:00.610639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:00.640211 1685746 cri.go:96] found id: ""
	I1222 01:44:00.640242 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.640252 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:00.640261 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:00.640334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:00.672011 1685746 cri.go:96] found id: ""
	I1222 01:44:00.672037 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.672046 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:00.672055 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:00.672067 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:00.730908 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:00.730946 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:00.746205 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:00.746280 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:00.814946 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.814969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:00.814982 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:00.841341 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:00.841376 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:03.372817 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:03.383361 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:03.383438 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:03.407536 1685746 cri.go:96] found id: ""
	I1222 01:44:03.407558 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.407566 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:03.407572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:03.407631 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:03.433092 1685746 cri.go:96] found id: ""
	I1222 01:44:03.433120 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.433129 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:03.433135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:03.433193 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:03.462721 1685746 cri.go:96] found id: ""
	I1222 01:44:03.462750 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.462759 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:03.462765 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:03.462824 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:03.512849 1685746 cri.go:96] found id: ""
	I1222 01:44:03.512871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.512880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:03.512887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:03.512946 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:03.574191 1685746 cri.go:96] found id: ""
	I1222 01:44:03.574217 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.574226 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:03.574232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:03.574299 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:03.600756 1685746 cri.go:96] found id: ""
	I1222 01:44:03.600785 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.600794 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:03.600801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:03.600865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:03.627524 1685746 cri.go:96] found id: ""
	I1222 01:44:03.627554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.627564 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:03.627571 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:03.627632 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:03.652207 1685746 cri.go:96] found id: ""
	I1222 01:44:03.652230 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.652239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:03.652248 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:03.652258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:03.710392 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:03.710427 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:03.725850 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:03.725877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:03.793641 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:03.793708 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:03.793725 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:03.819086 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:03.819122 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:06.350666 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:06.361704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:06.361772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:06.387959 1685746 cri.go:96] found id: ""
	I1222 01:44:06.387985 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.387994 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:06.388001 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:06.388063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:06.420195 1685746 cri.go:96] found id: ""
	I1222 01:44:06.420229 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.420239 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:06.420245 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:06.420318 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:06.444201 1685746 cri.go:96] found id: ""
	I1222 01:44:06.444228 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.444237 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:06.444244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:06.444326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:06.469606 1685746 cri.go:96] found id: ""
	I1222 01:44:06.469635 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.469644 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:06.469650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:06.469714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:06.516673 1685746 cri.go:96] found id: ""
	I1222 01:44:06.516703 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.516712 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:06.516719 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:06.516783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:06.554976 1685746 cri.go:96] found id: ""
	I1222 01:44:06.555004 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.555014 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:06.555020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:06.555079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:06.587406 1685746 cri.go:96] found id: ""
	I1222 01:44:06.587434 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.587443 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:06.587449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:06.587511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:06.620595 1685746 cri.go:96] found id: ""
	I1222 01:44:06.620623 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.620633 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:06.620642 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:06.620655 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:06.677532 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:06.677567 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:06.692910 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:06.692987 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:06.760398 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:06.750827   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.751480   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.753963   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.754997   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.755795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:06.750827   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.751480   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.753963   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.754997   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.755795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:06.760423 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:06.760436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:06.785709 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:06.785743 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.314372 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:09.325259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:09.325349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:09.350687 1685746 cri.go:96] found id: ""
	I1222 01:44:09.350712 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.350726 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:09.350733 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:09.350794 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:09.376225 1685746 cri.go:96] found id: ""
	I1222 01:44:09.376252 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.376260 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:09.376267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:09.376332 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:09.402898 1685746 cri.go:96] found id: ""
	I1222 01:44:09.402922 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.402931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:09.402937 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:09.403008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:09.428038 1685746 cri.go:96] found id: ""
	I1222 01:44:09.428066 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.428075 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:09.428082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:09.428150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:09.456772 1685746 cri.go:96] found id: ""
	I1222 01:44:09.456798 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.456806 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:09.456813 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:09.456871 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:09.484926 1685746 cri.go:96] found id: ""
	I1222 01:44:09.484953 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.484962 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:09.484968 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:09.485029 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:09.521247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.521276 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.521285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:09.521292 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:09.521361 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:09.559247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.559283 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.559292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:09.559301 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:09.559313 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:09.576452 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:09.576488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:09.647498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:09.639570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.640313   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.641853   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.642396   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.643920   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:09.639570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.640313   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.641853   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.642396   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.643920   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:09.647522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:09.647535 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:09.672763 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:09.672799 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.703339 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:09.703367 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.258428 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:12.269740 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:12.269827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:12.295142 1685746 cri.go:96] found id: ""
	I1222 01:44:12.295166 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.295174 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:12.295181 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:12.295239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:12.324426 1685746 cri.go:96] found id: ""
	I1222 01:44:12.324453 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.324462 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:12.324468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:12.324528 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:12.352908 1685746 cri.go:96] found id: ""
	I1222 01:44:12.352936 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.352945 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:12.352952 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:12.353016 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:12.382056 1685746 cri.go:96] found id: ""
	I1222 01:44:12.382106 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.382115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:12.382122 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:12.382184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:12.405895 1685746 cri.go:96] found id: ""
	I1222 01:44:12.405926 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.405935 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:12.405941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:12.406063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:12.432020 1685746 cri.go:96] found id: ""
	I1222 01:44:12.432046 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.432055 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:12.432062 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:12.432167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:12.460268 1685746 cri.go:96] found id: ""
	I1222 01:44:12.460316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.460325 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:12.460332 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:12.460391 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:12.510214 1685746 cri.go:96] found id: ""
	I1222 01:44:12.510243 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.510252 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:12.510261 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:12.510281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:12.574866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:12.574895 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.630459 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:12.630495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:12.645639 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:12.645667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:12.715658 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:12.707654   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.708218   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.709705   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.710254   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.711749   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:12.707654   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.708218   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.709705   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.710254   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.711749   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:12.715678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:12.715691 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.242028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:15.253031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:15.253105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:15.283751 1685746 cri.go:96] found id: ""
	I1222 01:44:15.283784 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.283794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:15.283800 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:15.283865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:15.308803 1685746 cri.go:96] found id: ""
	I1222 01:44:15.308830 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.308840 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:15.308846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:15.308911 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:15.334334 1685746 cri.go:96] found id: ""
	I1222 01:44:15.334362 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.334371 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:15.334378 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:15.334437 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:15.363819 1685746 cri.go:96] found id: ""
	I1222 01:44:15.363843 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.363852 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:15.363859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:15.363920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:15.389166 1685746 cri.go:96] found id: ""
	I1222 01:44:15.389194 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.389203 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:15.389211 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:15.389275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:15.418948 1685746 cri.go:96] found id: ""
	I1222 01:44:15.419022 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.419035 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:15.419042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:15.419135 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:15.446013 1685746 cri.go:96] found id: ""
	I1222 01:44:15.446105 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.446130 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:15.446162 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:15.446236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:15.470779 1685746 cri.go:96] found id: ""
	I1222 01:44:15.470806 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.470815 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:15.470825 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:15.470857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:15.551154 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:15.551246 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:15.578834 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:15.578861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:15.644949 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:15.637144   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.637770   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639327   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639800   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.641301   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:15.637144   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.637770   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639327   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639800   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.641301   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:15.644969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:15.644981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.670551 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:15.670585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:18.202679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:18.213735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:18.213812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:18.239304 1685746 cri.go:96] found id: ""
	I1222 01:44:18.239327 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.239336 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:18.239342 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:18.239401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:18.265064 1685746 cri.go:96] found id: ""
	I1222 01:44:18.265089 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.265098 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:18.265104 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:18.265165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:18.290606 1685746 cri.go:96] found id: ""
	I1222 01:44:18.290642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.290652 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:18.290659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:18.290734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:18.317208 1685746 cri.go:96] found id: ""
	I1222 01:44:18.317231 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.317240 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:18.317246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:18.317306 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:18.342186 1685746 cri.go:96] found id: ""
	I1222 01:44:18.342207 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.342216 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:18.342222 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:18.342280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:18.367436 1685746 cri.go:96] found id: ""
	I1222 01:44:18.367468 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.367477 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:18.367484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:18.367572 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:18.392591 1685746 cri.go:96] found id: ""
	I1222 01:44:18.392616 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.392625 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:18.392632 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:18.392691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:18.417782 1685746 cri.go:96] found id: ""
	I1222 01:44:18.417820 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.417829 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:18.417838 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:18.417850 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:18.475370 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:18.475402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:18.496693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:18.496722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:18.602667 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:18.602690 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:18.602704 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:18.628074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:18.628158 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:21.160991 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:21.171843 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:21.171925 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:21.197008 1685746 cri.go:96] found id: ""
	I1222 01:44:21.197035 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.197045 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:21.197051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:21.197111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:21.222701 1685746 cri.go:96] found id: ""
	I1222 01:44:21.222731 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.222740 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:21.222747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:21.222812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:21.247835 1685746 cri.go:96] found id: ""
	I1222 01:44:21.247858 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.247867 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:21.247874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:21.247932 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:21.272366 1685746 cri.go:96] found id: ""
	I1222 01:44:21.272400 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.272411 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:21.272418 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:21.272483 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:21.297348 1685746 cri.go:96] found id: ""
	I1222 01:44:21.297375 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.297384 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:21.297391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:21.297449 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:21.321989 1685746 cri.go:96] found id: ""
	I1222 01:44:21.322013 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.322022 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:21.322029 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:21.322112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:21.350652 1685746 cri.go:96] found id: ""
	I1222 01:44:21.350677 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.350685 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:21.350691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:21.350754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:21.382678 1685746 cri.go:96] found id: ""
	I1222 01:44:21.382748 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.382773 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:21.382791 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:21.382804 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:21.438683 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:21.438718 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:21.453712 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:21.453745 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:21.571593 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:21.571621 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:21.571635 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:21.598254 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:21.598290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:24.133046 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:24.144639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:24.144716 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:24.170797 1685746 cri.go:96] found id: ""
	I1222 01:44:24.170821 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.170830 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:24.170838 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:24.170901 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:24.198790 1685746 cri.go:96] found id: ""
	I1222 01:44:24.198813 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.198822 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:24.198830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:24.198892 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:24.223222 1685746 cri.go:96] found id: ""
	I1222 01:44:24.223245 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.223253 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:24.223259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:24.223317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:24.248490 1685746 cri.go:96] found id: ""
	I1222 01:44:24.248573 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.248590 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:24.248598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:24.248678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:24.273541 1685746 cri.go:96] found id: ""
	I1222 01:44:24.273570 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.273578 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:24.273585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:24.273647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:24.298819 1685746 cri.go:96] found id: ""
	I1222 01:44:24.298847 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.298856 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:24.298863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:24.298921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:24.324215 1685746 cri.go:96] found id: ""
	I1222 01:44:24.324316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.324334 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:24.324341 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:24.324420 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:24.349700 1685746 cri.go:96] found id: ""
	I1222 01:44:24.349727 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.349736 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:24.349745 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:24.349756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:24.405384 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:24.405419 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:24.420496 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:24.420524 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:24.481353 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:24.481378 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:24.481392 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:24.507731 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:24.508076 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.051455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:27.062328 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:27.062402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:27.088764 1685746 cri.go:96] found id: ""
	I1222 01:44:27.088786 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.088795 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:27.088801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:27.088859 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:27.113929 1685746 cri.go:96] found id: ""
	I1222 01:44:27.113951 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.113959 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:27.113966 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:27.114027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:27.139537 1685746 cri.go:96] found id: ""
	I1222 01:44:27.139562 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.139577 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:27.139584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:27.139645 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:27.164769 1685746 cri.go:96] found id: ""
	I1222 01:44:27.164792 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.164800 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:27.164807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:27.164867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:27.190396 1685746 cri.go:96] found id: ""
	I1222 01:44:27.190424 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.190433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:27.190440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:27.190503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:27.215574 1685746 cri.go:96] found id: ""
	I1222 01:44:27.215599 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.215608 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:27.215616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:27.215684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:27.246139 1685746 cri.go:96] found id: ""
	I1222 01:44:27.246162 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.246172 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:27.246178 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:27.246239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:27.272153 1685746 cri.go:96] found id: ""
	I1222 01:44:27.272177 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.272185 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:27.272193 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:27.272205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.303523 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:27.303552 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:27.363938 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:27.363985 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:27.380130 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:27.380163 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:27.443113 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:27.443137 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:27.443149 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:29.969751 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:29.980564 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:29.980638 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:30.027489 1685746 cri.go:96] found id: ""
	I1222 01:44:30.027515 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.027524 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:30.027532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:30.027604 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:30.063116 1685746 cri.go:96] found id: ""
	I1222 01:44:30.063142 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.063152 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:30.063160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:30.063229 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:30.111428 1685746 cri.go:96] found id: ""
	I1222 01:44:30.111455 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.111466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:30.111473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:30.111543 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:30.142346 1685746 cri.go:96] found id: ""
	I1222 01:44:30.142381 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.142391 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:30.142406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:30.142499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:30.171044 1685746 cri.go:96] found id: ""
	I1222 01:44:30.171068 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.171078 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:30.171084 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:30.171150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:30.206010 1685746 cri.go:96] found id: ""
	I1222 01:44:30.206034 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.206044 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:30.206051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:30.206225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:30.235230 1685746 cri.go:96] found id: ""
	I1222 01:44:30.235255 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.235264 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:30.235272 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:30.235404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:30.262624 1685746 cri.go:96] found id: ""
	I1222 01:44:30.262651 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.262661 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:30.262671 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:30.262689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:30.320010 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:30.320048 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:30.336273 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:30.336303 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:30.407334 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:30.407358 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:30.407373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:30.432976 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:30.433010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:32.965996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:32.976893 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:32.976972 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:33.004108 1685746 cri.go:96] found id: ""
	I1222 01:44:33.004138 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.004149 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:33.004157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:33.004293 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:33.032305 1685746 cri.go:96] found id: ""
	I1222 01:44:33.032333 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.032343 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:33.032350 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:33.032410 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:33.060572 1685746 cri.go:96] found id: ""
	I1222 01:44:33.060600 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.060610 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:33.060616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:33.060680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:33.086067 1685746 cri.go:96] found id: ""
	I1222 01:44:33.086112 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.086122 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:33.086129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:33.086188 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:33.112283 1685746 cri.go:96] found id: ""
	I1222 01:44:33.112310 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.112320 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:33.112326 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:33.112390 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:33.143337 1685746 cri.go:96] found id: ""
	I1222 01:44:33.143363 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.143372 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:33.143379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:33.143441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:33.169224 1685746 cri.go:96] found id: ""
	I1222 01:44:33.169250 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.169259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:33.169267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:33.169327 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:33.198401 1685746 cri.go:96] found id: ""
	I1222 01:44:33.198422 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.198431 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:33.198440 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:33.198451 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:33.256328 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:33.256364 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:33.271899 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:33.271930 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:33.338753 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:33.338786 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:33.338800 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:33.364007 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:33.364042 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:35.895269 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:35.906191 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:35.906266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:35.931271 1685746 cri.go:96] found id: ""
	I1222 01:44:35.931297 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.931306 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:35.931313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:35.931372 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:35.958259 1685746 cri.go:96] found id: ""
	I1222 01:44:35.958289 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.958298 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:35.958312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:35.958414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:35.982836 1685746 cri.go:96] found id: ""
	I1222 01:44:35.982861 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.982871 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:35.982877 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:35.982937 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:36.012610 1685746 cri.go:96] found id: ""
	I1222 01:44:36.012642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.012652 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:36.012659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:36.012739 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:36.039888 1685746 cri.go:96] found id: ""
	I1222 01:44:36.039914 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.039924 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:36.039933 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:36.039995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:36.070115 1685746 cri.go:96] found id: ""
	I1222 01:44:36.070144 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.070153 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:36.070160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:36.070220 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:36.095790 1685746 cri.go:96] found id: ""
	I1222 01:44:36.095871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.095887 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:36.095896 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:36.095967 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:36.122442 1685746 cri.go:96] found id: ""
	I1222 01:44:36.122519 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.122531 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:36.122570 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:36.122585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:36.151370 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:36.151396 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:36.206896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:36.206937 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:36.222382 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:36.222413 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:36.290888 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:36.290912 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:36.290927 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:38.822770 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:38.837195 1685746 out.go:203] 
	W1222 01:44:38.840003 1685746 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:44:38.840044 1685746 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:44:38.840057 1685746 out.go:285] * Related issues:
	* Related issues:
	W1222 01:44:38.840077 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:44:38.840096 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:44:38.842944 1685746 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 105
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-869293
helpers_test.go:244: (dbg) docker inspect newest-cni-869293:

-- stdout --
	[
	    {
	        "Id": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	        "Created": "2025-12-22T01:28:35.561963158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1685878,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:38:31.964858425Z",
	            "FinishedAt": "2025-12-22T01:38:30.65991944Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hostname",
	        "HostsPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hosts",
	        "LogPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e-json.log",
	        "Name": "/newest-cni-869293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-869293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-869293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	                "LowerDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-869293",
	                "Source": "/var/lib/docker/volumes/newest-cni-869293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-869293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-869293",
	                "name.minikube.sigs.k8s.io": "newest-cni-869293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e62360fe6e0fa793fd3d0004ae901a019cba72f07e506d4e4de6097400773d18",
	            "SandboxKey": "/var/run/docker/netns/e62360fe6e0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38707"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38708"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38711"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38709"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38710"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-869293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:95:8a:54:97:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "237b6ac5b33ea8f647685859c16cf161283b5f3d52eea65816f2e7dfeb4ec191",
	                    "EndpointID": "5a4926332b20d8c327aefbaecbda7375782c9a567c1a86203a3a41986fbfb8d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-869293",
	                        "05e1fe12904b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (374.888717ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25: (1.538290357s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-778490 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ pause   │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ unpause │ -p default-k8s-diff-port-778490 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:25 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p no-preload-154186 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p no-preload-154186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ stop    │ -p newest-cni-869293 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-869293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:31.686572 1685746 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:31.686782 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.686816 1685746 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:31.686836 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.687133 1685746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:38:31.687563 1685746 out.go:368] Setting JSON to false
	I1222 01:38:31.688584 1685746 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116465,"bootTime":1766251047,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:38:31.688686 1685746 start.go:143] virtualization:  
	I1222 01:38:31.691576 1685746 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:31.695464 1685746 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:31.695552 1685746 notify.go:221] Checking for updates...
	I1222 01:38:31.701535 1685746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:31.704637 1685746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:31.707560 1685746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:38:31.710534 1685746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:31.713575 1685746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:31.717166 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:31.717762 1685746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:31.753414 1685746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:31.753539 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.812499 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.803096079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.812613 1685746 docker.go:319] overlay module found
	I1222 01:38:31.815770 1685746 out.go:179] * Using the docker driver based on existing profile
	I1222 01:38:31.818545 1685746 start.go:309] selected driver: docker
	I1222 01:38:31.818566 1685746 start.go:928] validating driver "docker" against &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.818662 1685746 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:31.819384 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.880587 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.870819289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.880955 1685746 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:31.880984 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:31.881038 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:31.881081 1685746 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.884279 1685746 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:38:31.887056 1685746 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:38:31.890043 1685746 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:31.892868 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:31.892919 1685746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:38:31.892932 1685746 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:31.892952 1685746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:31.893022 1685746 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:31.893039 1685746 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:38:31.893153 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:31.913018 1685746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:31.913041 1685746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:31.913060 1685746 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:31.913090 1685746 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:31.913180 1685746 start.go:364] duration metric: took 44.275µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:38:31.913204 1685746 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:38:31.913210 1685746 fix.go:54] fixHost starting: 
	I1222 01:38:31.913477 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:31.930780 1685746 fix.go:112] recreateIfNeeded on newest-cni-869293: state=Stopped err=<nil>
	W1222 01:38:31.930815 1685746 fix.go:138] unexpected machine state, will restart: <nil>
	W1222 01:38:29.750532 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:32.248109 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:31.934050 1685746 out.go:252] * Restarting existing docker container for "newest-cni-869293" ...
	I1222 01:38:31.934152 1685746 cli_runner.go:164] Run: docker start newest-cni-869293
	I1222 01:38:32.204881 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:32.243691 1685746 kic.go:430] container "newest-cni-869293" state is running.
	I1222 01:38:32.244096 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:32.265947 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:32.266210 1685746 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:32.266268 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:32.293919 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:32.294281 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:32.294292 1685746 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:32.294932 1685746 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54476->127.0.0.1:38707: read: connection reset by peer
	I1222 01:38:35.433786 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.433813 1685746 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:38:35.433886 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.451516 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.451830 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.451848 1685746 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:38:35.591409 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.591519 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.609341 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.609647 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.609670 1685746 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:35.742798 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:35.742824 1685746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:38:35.742864 1685746 ubuntu.go:190] setting up certificates
	I1222 01:38:35.742881 1685746 provision.go:84] configureAuth start
	I1222 01:38:35.742942 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:35.763152 1685746 provision.go:143] copyHostCerts
	I1222 01:38:35.763214 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:38:35.763230 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:38:35.763306 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:38:35.763401 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:38:35.763407 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:38:35.763431 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:38:35.763483 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:38:35.763490 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:38:35.763514 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:38:35.763557 1685746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:38:35.889485 1685746 provision.go:177] copyRemoteCerts
	I1222 01:38:35.889557 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:35.889605 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.914143 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.016150 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:38:36.035930 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:36.054716 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:36.072586 1685746 provision.go:87] duration metric: took 329.680992ms to configureAuth
	I1222 01:38:36.072618 1685746 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:36.072830 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:36.072842 1685746 machine.go:97] duration metric: took 3.806623107s to provisionDockerMachine
	I1222 01:38:36.072850 1685746 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:38:36.072866 1685746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:36.072926 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:36.072980 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.090324 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.187013 1685746 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:36.191029 1685746 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:36.191111 1685746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:36.191134 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:38:36.191215 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:38:36.191355 1685746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:38:36.191477 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:36.200008 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:36.219292 1685746 start.go:296] duration metric: took 146.420744ms for postStartSetup
	I1222 01:38:36.219381 1685746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:36.219430 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.237412 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.336664 1685746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:36.342619 1685746 fix.go:56] duration metric: took 4.429400761s for fixHost
	I1222 01:38:36.342646 1685746 start.go:83] releasing machines lock for "newest-cni-869293", held for 4.429452897s
	I1222 01:38:36.342750 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:36.362211 1685746 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:36.362264 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.362344 1685746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:36.362407 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.385216 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.393122 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.571819 1685746 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:36.578591 1685746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:36.583121 1685746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:36.583193 1685746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:36.591539 1685746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:38:36.591564 1685746 start.go:496] detecting cgroup driver to use...
	I1222 01:38:36.591620 1685746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:36.591689 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:38:36.609980 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:38:36.623763 1685746 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:36.623883 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:36.639236 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:36.652937 1685746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:36.763224 1685746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:36.883204 1685746 docker.go:234] disabling docker service ...
	I1222 01:38:36.883275 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:36.898372 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:36.911453 1685746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:37.034252 1685746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:37.157335 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:37.170564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:37.185195 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:38:37.194710 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:38:37.204647 1685746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:38:37.204731 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:38:37.214808 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.223830 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:38:37.232600 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.242680 1685746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:37.254369 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:38:37.265094 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:38:37.278711 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:38:37.288297 1685746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:37.299386 1685746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:37.306803 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.412668 1685746 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:38:37.531042 1685746 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:38:37.531187 1685746 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:38:37.535291 1685746 start.go:564] Will wait 60s for crictl version
	I1222 01:38:37.535398 1685746 ssh_runner.go:195] Run: which crictl
	I1222 01:38:37.539239 1685746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:37.568186 1685746 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:38:37.568329 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.589324 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.614497 1685746 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:38:37.617592 1685746 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:37.633737 1685746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:37.637631 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.650774 1685746 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1222 01:38:34.249047 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:36.748953 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:37.653725 1685746 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:37.653882 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:37.653965 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.679481 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.679507 1685746 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:38:37.679567 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.707944 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.707969 1685746 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:37.707979 1685746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:38:37.708083 1685746 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:37.708165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:38:37.740577 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:37.740600 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:37.740621 1685746 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:37.740645 1685746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:37.740759 1685746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:38:37.740831 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:37.749395 1685746 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:37.749470 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:37.757587 1685746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:38:37.770794 1685746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:37.784049 1685746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:38:37.797792 1685746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:37.801552 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.811598 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.940636 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:37.962625 1685746 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:38:37.962649 1685746 certs.go:195] generating shared ca certs ...
	I1222 01:38:37.962682 1685746 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:37.962837 1685746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:38:37.962900 1685746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:38:37.962912 1685746 certs.go:257] generating profile certs ...
	I1222 01:38:37.963014 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:38:37.963084 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:38:37.963128 1685746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:38:37.963238 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:38:37.963276 1685746 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:37.963287 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:38:37.963316 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:37.963343 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:37.963379 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:37.963434 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:37.964596 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:37.999913 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:38.025465 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:38.053443 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:38.087732 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:38.107200 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:38:38.125482 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:38.143284 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:38.161557 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:38:38.180124 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:38.198446 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:38:38.215766 1685746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:38.228774 1685746 ssh_runner.go:195] Run: openssl version
	I1222 01:38:38.235631 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.244039 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:38:38.252123 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256169 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256240 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.297738 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:38.305673 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.313250 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:38.321143 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325161 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325259 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.366760 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:38.375589 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.383142 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:38:38.391262 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395405 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395474 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.436708 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
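	The three cert blocks above all follow the same convention: copy a PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and verify a symlink named `<hash>.0` in /etc/ssl/certs (e.g. `3ec20f2e.0`, `b5213941.0`, `51391683.0`), which is how TLS libraries locate a CA by hash lookup. A sketch of that convention in a throwaway directory, assuming `openssl` is available (the cert and paths here are generated for illustration):

```shell
# Link a CA cert under its OpenSSL subject-hash name, as the log verifies
# with `test -L /etc/ssl/certs/<hash>.0`.
DIR=$(mktemp -d)
# Generate a throwaway self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$DIR/demo.key" -out "$DIR/demo.pem" -days 1 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/demo.pem")
ln -fs "$DIR/demo.pem" "$DIR/$HASH.0"
ls -l "$DIR/$HASH.0"
```

	The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value; minikube only ever installs one per hash here.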
	I1222 01:38:38.444445 1685746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:38.448390 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:38:38.489618 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:38:38.530725 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:38:38.571636 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:38:38.612592 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:38:38.653872 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:38:38.695135 1685746 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:38.695236 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:38.695304 1685746 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:38.730406 1685746 cri.go:96] found id: ""
	I1222 01:38:38.730480 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:38.742929 1685746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:38:38.742952 1685746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:38:38.743012 1685746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:38:38.765617 1685746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:38:38.766245 1685746 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.766510 1685746 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-869293" cluster setting kubeconfig missing "newest-cni-869293" context setting]
	I1222 01:38:38.766957 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.768687 1685746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:38:38.776658 1685746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:38:38.776695 1685746 kubeadm.go:602] duration metric: took 33.737033ms to restartPrimaryControlPlane
	I1222 01:38:38.776705 1685746 kubeadm.go:403] duration metric: took 81.581475ms to StartCluster
	I1222 01:38:38.776720 1685746 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.776793 1685746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.777670 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.777888 1685746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:38:38.778285 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:38.778259 1685746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:38:38.778393 1685746 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-869293"
	I1222 01:38:38.778408 1685746 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-869293"
	I1222 01:38:38.778433 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.778917 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.779098 1685746 addons.go:70] Setting dashboard=true in profile "newest-cni-869293"
	I1222 01:38:38.779126 1685746 addons.go:239] Setting addon dashboard=true in "newest-cni-869293"
	W1222 01:38:38.779211 1685746 addons.go:248] addon dashboard should already be in state true
	I1222 01:38:38.779264 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.779355 1685746 addons.go:70] Setting default-storageclass=true in profile "newest-cni-869293"
	I1222 01:38:38.779382 1685746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-869293"
	I1222 01:38:38.779657 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.780717 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.783183 1685746 out.go:179] * Verifying Kubernetes components...
	I1222 01:38:38.795835 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:38.839727 1685746 addons.go:239] Setting addon default-storageclass=true in "newest-cni-869293"
	I1222 01:38:38.839773 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.844706 1685746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:38:38.844788 1685746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:38:38.845056 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.847706 1685746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:38.847732 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:38:38.847798 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.850623 1685746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:38:38.856243 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:38:38.856273 1685746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:38:38.856351 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.873943 1685746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:38.873976 1685746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:38:38.874046 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.897069 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.917887 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.925239 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:39.040289 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:39.062591 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:39.071403 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:38:39.071429 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:38:39.085714 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:38:39.085742 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:38:39.113564 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:39.117642 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:38:39.117668 1685746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:38:39.160317 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:38:39.160342 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:38:39.179666 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:38:39.179693 1685746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:38:39.195940 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:38:39.195967 1685746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:38:39.211128 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:38:39.211152 1685746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:38:39.229341 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:38:39.229367 1685746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:38:39.242863 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.242891 1685746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:38:39.257396 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.740898 1685746 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:38:39.740996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:39.741091 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741148 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.741150 1685746 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741362 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.924082 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.987453 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.012530 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.076254 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.106299 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.156991 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.241110 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:40.291973 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.350617 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:40.361182 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.389531 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.437774 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.465333 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.692837 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.741460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:40.766384 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.961925 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.997418 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:41.047996 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:41.103696 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:41.241962 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:41.674831 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.248045 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:41.248244 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:41.741299 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:41.744404 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.118142 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:42.189177 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.241414 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:42.263947 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:42.333305 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.741698 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.241589 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.265699 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:43.338843 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.509282 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:43.559893 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:43.581660 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.623026 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.741112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.241130 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.741229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.931703 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:45.008485 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.244431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.741178 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.765524 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:45.843868 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.977122 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:46.040374 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:46.241453 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:46.486248 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:46.559168 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.248311 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:45.249134 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:47.748896 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:46.741869 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.241095 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.741431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.241112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.294921 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:48.361284 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:48.741773 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.852570 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:48.911873 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.241377 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:49.368148 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:49.429800 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.741220 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.241219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.741547 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:51.241159 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:49.748932 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:52.248838 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:51.741774 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.241901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.391494 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:52.452597 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.452636 1685746 retry.go:84] will retry after 11.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.508552 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:52.579056 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.741603 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.241037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.297681 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:53.358617 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:53.741128 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.241259 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.741444 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.241131 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.741185 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.241903 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:54.748014 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:56.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:56.742022 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.871217 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:56.931377 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:56.931421 1685746 retry.go:84] will retry after 12.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:57.241904 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:57.741132 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.241082 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.741129 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.241514 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.741571 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.241104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.342627 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:00.433191 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.433235 1685746 retry.go:84] will retry after 8.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.741833 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:01.241455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:59.248212 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:01.248492 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:01.741502 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.241599 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.741070 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.241152 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.041996 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:04.111760 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.111812 1685746 retry.go:84] will retry after 10s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.242089 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.741350 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.241736 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.741098 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:06.241279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:03.747982 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:05.748583 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:07.748998 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:06.742311 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.241927 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.741133 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.241157 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.532510 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:08.603273 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.603314 1685746 retry.go:84] will retry after 7.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.741625 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.241616 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.741180 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.845450 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:09.907468 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:10.242040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:10.742004 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:11.242043 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:10.248934 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:12.748076 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:11.741028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.241114 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.741779 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.241398 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.741757 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.084932 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:14.149870 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.149915 1685746 retry.go:84] will retry after 13.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.241288 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.742009 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.241500 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.241659 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.395227 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:16.456949 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:14.748959 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:17.248674 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:16.741507 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.241459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.741042 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.241111 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.741162 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.241875 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.741715 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.241732 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:21.241347 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:19.748622 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:21.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:21.741639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.241911 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.742051 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.241970 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.741127 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.241560 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.741692 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.241106 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.741122 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:26.241137 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:24.248544 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:26.747990 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:26.741585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.241155 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.301256 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:27.375517 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.375598 1685746 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.241034 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.741642 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:29.226555 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:39:29.242011 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:29.291422 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:29.741622 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.245888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:31.241550 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:28.748186 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:30.748280 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:32.748600 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:31.741066 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.241183 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.741695 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.241134 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.741807 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.241685 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.741125 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.241915 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.741241 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:36.241639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:35.249008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:37.748582 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:36.741652 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.241141 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.741891 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.054310 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:38.118505 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.118547 1685746 retry.go:84] will retry after 47.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.241764 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:39.241609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:39.241696 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:39.269891 1685746 cri.go:96] found id: ""
	I1222 01:39:39.269914 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.269923 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:39.269930 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:39.269991 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:39.300389 1685746 cri.go:96] found id: ""
	I1222 01:39:39.300414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.300423 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:39.300430 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:39.300501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:39.326557 1685746 cri.go:96] found id: ""
	I1222 01:39:39.326582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.326592 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:39.326598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:39.326697 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:39.354049 1685746 cri.go:96] found id: ""
	I1222 01:39:39.354115 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.354125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:39.354132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:39.354202 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:39.380457 1685746 cri.go:96] found id: ""
	I1222 01:39:39.380490 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.380500 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:39.380507 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:39.380577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:39.407039 1685746 cri.go:96] found id: ""
	I1222 01:39:39.407062 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.407070 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:39.407076 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:39.407139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:39.431541 1685746 cri.go:96] found id: ""
	I1222 01:39:39.431568 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.431577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:39.431584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:39.431676 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:39.457555 1685746 cri.go:96] found id: ""
	I1222 01:39:39.457588 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.457607 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:39.457616 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:39.457629 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:39.517907 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:39.517997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:39.534348 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:39.534373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:39.607407 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:39.607438 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:39.607463 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:39.634050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:39.634094 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:40.248054 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:42.748083 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:42.163786 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:42.176868 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:42.176959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:42.208642 1685746 cri.go:96] found id: ""
	I1222 01:39:42.208672 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.208682 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:42.208688 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:42.208757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:42.249523 1685746 cri.go:96] found id: ""
	I1222 01:39:42.249552 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.249562 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:42.249569 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:42.249641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:42.283515 1685746 cri.go:96] found id: ""
	I1222 01:39:42.283542 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.283550 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:42.283557 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:42.283659 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:42.312237 1685746 cri.go:96] found id: ""
	I1222 01:39:42.312260 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.312269 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:42.312276 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:42.312335 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:42.341269 1685746 cri.go:96] found id: ""
	I1222 01:39:42.341297 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.341306 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:42.341312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:42.341374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:42.367696 1685746 cri.go:96] found id: ""
	I1222 01:39:42.367723 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.367732 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:42.367739 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:42.367804 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:42.396577 1685746 cri.go:96] found id: ""
	I1222 01:39:42.396602 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.396612 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:42.396618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:42.396689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:42.426348 1685746 cri.go:96] found id: ""
	I1222 01:39:42.426380 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.426392 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:42.426413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:42.426433 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:42.481969 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:42.482005 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:42.499357 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:42.499436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:42.576627 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:42.576649 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:42.576663 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:42.601751 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:42.601784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.131239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:45.157288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:45.157379 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:45.207917 1685746 cri.go:96] found id: ""
	I1222 01:39:45.207953 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.207963 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:45.207975 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:45.208042 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:45.255413 1685746 cri.go:96] found id: ""
	I1222 01:39:45.255448 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.255459 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:45.255467 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:45.255564 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:45.300163 1685746 cri.go:96] found id: ""
	I1222 01:39:45.300196 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.300206 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:45.300214 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:45.300285 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:45.348918 1685746 cri.go:96] found id: ""
	I1222 01:39:45.348943 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.348952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:45.348959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:45.349022 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:45.379477 1685746 cri.go:96] found id: ""
	I1222 01:39:45.379502 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.379512 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:45.379518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:45.379580 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:45.410514 1685746 cri.go:96] found id: ""
	I1222 01:39:45.410535 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.410543 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:45.410550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:45.410611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:45.436661 1685746 cri.go:96] found id: ""
	I1222 01:39:45.436686 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.436695 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:45.436702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:45.436769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:45.466972 1685746 cri.go:96] found id: ""
	I1222 01:39:45.467001 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.467010 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:45.467019 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:45.467032 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:45.567688 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:45.567712 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:45.567731 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:45.593712 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:45.593757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.626150 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:45.626179 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:45.681273 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:45.681310 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:39:44.748908 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:47.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:48.196684 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:48.207640 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:48.207718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:48.232650 1685746 cri.go:96] found id: ""
	I1222 01:39:48.232680 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.232688 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:48.232708 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:48.232772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:48.264801 1685746 cri.go:96] found id: ""
	I1222 01:39:48.264831 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.264841 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:48.264848 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:48.264915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:48.300270 1685746 cri.go:96] found id: ""
	I1222 01:39:48.300300 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.300310 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:48.300317 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:48.300388 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:48.334711 1685746 cri.go:96] found id: ""
	I1222 01:39:48.334782 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.334806 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:48.334821 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:48.334898 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:48.359955 1685746 cri.go:96] found id: ""
	I1222 01:39:48.360023 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.360038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:48.360052 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:48.360124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:48.386551 1685746 cri.go:96] found id: ""
	I1222 01:39:48.386574 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.386583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:48.386589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:48.386648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:48.412026 1685746 cri.go:96] found id: ""
	I1222 01:39:48.412052 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.412062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:48.412069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:48.412129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:48.440847 1685746 cri.go:96] found id: ""
	I1222 01:39:48.440870 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.440878 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:48.440887 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:48.440897 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:48.496591 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:48.496673 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:48.512755 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:48.512834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:48.596174 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:48.596249 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:48.596281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:48.621362 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:48.621397 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:51.155431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:51.169542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:51.169616 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:51.195476 1685746 cri.go:96] found id: ""
	I1222 01:39:51.195500 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.195509 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:51.195516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:51.195585 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:51.220215 1685746 cri.go:96] found id: ""
	I1222 01:39:51.220240 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.220249 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:51.220255 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:51.220324 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:51.248478 1685746 cri.go:96] found id: ""
	I1222 01:39:51.248508 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.248527 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:51.248534 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:51.248594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:51.282587 1685746 cri.go:96] found id: ""
	I1222 01:39:51.282615 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.282624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:51.282630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:51.282691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:51.310999 1685746 cri.go:96] found id: ""
	I1222 01:39:51.311029 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.311038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:51.311044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:51.311105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:51.338337 1685746 cri.go:96] found id: ""
	I1222 01:39:51.338414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.338431 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:51.338438 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:51.338517 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:51.365554 1685746 cri.go:96] found id: ""
	I1222 01:39:51.365582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.365591 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:51.365598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:51.365656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:51.389874 1685746 cri.go:96] found id: ""
	I1222 01:39:51.389903 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.389913 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:51.389922 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:51.389933 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:51.449732 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:51.449797 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:51.467573 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:51.467669 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:51.568437 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:51.568512 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:51.568561 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:51.595758 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:51.595841 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:49.249046 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:51.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:53.905270 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:53.968241 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:53.968406 1685746 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:39:54.129563 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:54.143910 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:54.144012 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:54.169973 1685746 cri.go:96] found id: ""
	I1222 01:39:54.170009 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.170018 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:54.170042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:54.170158 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:54.198811 1685746 cri.go:96] found id: ""
	I1222 01:39:54.198838 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.198847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:54.198854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:54.198917 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:54.224425 1685746 cri.go:96] found id: ""
	I1222 01:39:54.224452 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.224462 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:54.224468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:54.224549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:54.273957 1685746 cri.go:96] found id: ""
	I1222 01:39:54.273983 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.273992 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:54.273998 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:54.274059 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:54.306801 1685746 cri.go:96] found id: ""
	I1222 01:39:54.306826 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.306836 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:54.306842 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:54.306916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:54.339513 1685746 cri.go:96] found id: ""
	I1222 01:39:54.339539 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.339548 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:54.339555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:54.339617 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:54.365259 1685746 cri.go:96] found id: ""
	I1222 01:39:54.365285 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.365295 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:54.365301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:54.365363 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:54.390271 1685746 cri.go:96] found id: ""
	I1222 01:39:54.390294 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.390303 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:54.390312 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:54.390324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:54.445696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:54.445728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:54.460676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:54.460751 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:54.537038 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:54.537060 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:54.537075 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:54.566201 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:54.566234 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:53.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:56.248725 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:57.093953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:57.104681 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:57.104755 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:57.132428 1685746 cri.go:96] found id: ""
	I1222 01:39:57.132455 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.132465 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:57.132472 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:57.132532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:57.158487 1685746 cri.go:96] found id: ""
	I1222 01:39:57.158512 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.158521 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:57.158528 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:57.158589 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:57.184175 1685746 cri.go:96] found id: ""
	I1222 01:39:57.184203 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.184213 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:57.184219 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:57.184279 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:57.215724 1685746 cri.go:96] found id: ""
	I1222 01:39:57.215752 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.215761 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:57.215768 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:57.215830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:57.252375 1685746 cri.go:96] found id: ""
	I1222 01:39:57.252408 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.252420 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:57.252427 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:57.252499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:57.291286 1685746 cri.go:96] found id: ""
	I1222 01:39:57.291323 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.291333 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:57.291344 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:57.291408 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:57.322496 1685746 cri.go:96] found id: ""
	I1222 01:39:57.322577 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.322594 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:57.322602 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:57.322678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:57.352695 1685746 cri.go:96] found id: ""
	I1222 01:39:57.352722 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.352731 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:57.352741 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:57.352754 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:57.410232 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:57.410271 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:57.425451 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:57.425481 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:57.498123 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:57.498197 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:57.498226 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:57.530586 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:57.530677 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:00.062361 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:00.152699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:00.152784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:00.243584 1685746 cri.go:96] found id: ""
	I1222 01:40:00.243618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.243635 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:00.243645 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:00.243728 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:00.323644 1685746 cri.go:96] found id: ""
	I1222 01:40:00.323704 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.323720 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:00.323730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:00.323805 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:00.411473 1685746 cri.go:96] found id: ""
	I1222 01:40:00.411502 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.411521 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:00.411532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:00.411621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:00.511894 1685746 cri.go:96] found id: ""
	I1222 01:40:00.511922 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.511933 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:00.511941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:00.512015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:00.575706 1685746 cri.go:96] found id: ""
	I1222 01:40:00.575736 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.575746 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:00.575753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:00.575828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:00.666886 1685746 cri.go:96] found id: ""
	I1222 01:40:00.666913 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.666922 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:00.666929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:00.667011 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:00.704456 1685746 cri.go:96] found id: ""
	I1222 01:40:00.704490 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.704499 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:00.704513 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:00.704583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:00.763369 1685746 cri.go:96] found id: ""
	I1222 01:40:00.763404 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.763415 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:00.763425 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:00.763439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:00.822507 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:00.822546 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:00.839492 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:00.839529 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:00.911350 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:00.911374 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:00.911389 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:00.937901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:00.937953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:01.674108 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:58.748290 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:00.756406 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:01.748211 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:01.748257 1685746 retry.go:84] will retry after 28.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:03.469297 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:03.480071 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:03.480145 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:03.519512 1685746 cri.go:96] found id: ""
	I1222 01:40:03.519627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.519661 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:03.519709 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:03.520078 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:03.555737 1685746 cri.go:96] found id: ""
	I1222 01:40:03.555763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.555806 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:03.555819 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:03.555909 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:03.580955 1685746 cri.go:96] found id: ""
	I1222 01:40:03.580986 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.580995 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:03.581004 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:03.581068 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:03.610855 1685746 cri.go:96] found id: ""
	I1222 01:40:03.610935 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.610952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:03.610961 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:03.611037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:03.635994 1685746 cri.go:96] found id: ""
	I1222 01:40:03.636019 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.636027 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:03.636033 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:03.636103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:03.661008 1685746 cri.go:96] found id: ""
	I1222 01:40:03.661086 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.661109 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:03.661132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:03.661249 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:03.685551 1685746 cri.go:96] found id: ""
	I1222 01:40:03.685577 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.685586 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:03.685594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:03.685653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:03.710025 1685746 cri.go:96] found id: ""
	I1222 01:40:03.710054 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.710063 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:03.710073 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:03.710109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:03.748992 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:03.749066 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:03.812952 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:03.812990 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:03.828176 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:03.828207 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:03.895557 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:03.895583 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:03.895596 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:06.421124 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:06.432321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:06.432435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:06.458845 1685746 cri.go:96] found id: ""
	I1222 01:40:06.458926 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.458944 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:06.458951 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:06.459024 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:06.483853 1685746 cri.go:96] found id: ""
	I1222 01:40:06.483881 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.483890 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:06.483897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:06.483956 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:06.518710 1685746 cri.go:96] found id: ""
	I1222 01:40:06.518741 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.518750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:06.518757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:06.518821 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:06.549152 1685746 cri.go:96] found id: ""
	I1222 01:40:06.549183 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.549191 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:06.549198 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:06.549256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:06.579003 1685746 cri.go:96] found id: ""
	I1222 01:40:06.579032 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.579041 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:06.579048 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:06.579110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:06.614999 1685746 cri.go:96] found id: ""
	I1222 01:40:06.615029 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.615038 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:06.615045 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:06.615109 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:06.644049 1685746 cri.go:96] found id: ""
	I1222 01:40:06.644073 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.644082 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:06.644088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:06.644150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:06.670551 1685746 cri.go:96] found id: ""
	I1222 01:40:06.670580 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.670590 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:06.670599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:06.670630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:03.248649 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:05.249130 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:07.749016 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:06.696127 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:06.696164 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:06.728583 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:06.728612 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:06.788068 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:06.788103 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:06.805676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:06.805708 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:06.875097 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.375863 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:09.386805 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:09.386883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:09.413272 1685746 cri.go:96] found id: ""
	I1222 01:40:09.413299 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.413307 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:09.413313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:09.413374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:09.438591 1685746 cri.go:96] found id: ""
	I1222 01:40:09.438615 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.438623 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:09.438630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:09.438692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:09.463919 1685746 cri.go:96] found id: ""
	I1222 01:40:09.463943 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.463952 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:09.463959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:09.464026 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:09.493604 1685746 cri.go:96] found id: ""
	I1222 01:40:09.493627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.493641 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:09.493648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:09.493707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:09.529370 1685746 cri.go:96] found id: ""
	I1222 01:40:09.529394 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.529404 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:09.529411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:09.529477 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:09.562121 1685746 cri.go:96] found id: ""
	I1222 01:40:09.562150 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.562160 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:09.562167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:09.562233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:09.587896 1685746 cri.go:96] found id: ""
	I1222 01:40:09.587924 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.587935 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:09.587942 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:09.588010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:09.613576 1685746 cri.go:96] found id: ""
	I1222 01:40:09.613600 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.613609 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:09.613619 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:09.613630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:09.671590 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:09.671627 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:09.688438 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:09.688468 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:09.770484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.770797 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:09.770834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:09.803134 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:09.803237 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:10.247989 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:12.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:12.334803 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:12.345660 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:12.345780 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:12.375026 1685746 cri.go:96] found id: ""
	I1222 01:40:12.375056 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.375067 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:12.375075 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:12.375154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:12.400255 1685746 cri.go:96] found id: ""
	I1222 01:40:12.400282 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.400291 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:12.400299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:12.400402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:12.425430 1685746 cri.go:96] found id: ""
	I1222 01:40:12.425458 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.425467 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:12.425474 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:12.425535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:12.450734 1685746 cri.go:96] found id: ""
	I1222 01:40:12.450816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.450832 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:12.450841 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:12.450918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:12.477690 1685746 cri.go:96] found id: ""
	I1222 01:40:12.477719 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.477735 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:12.477742 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:12.477803 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:12.517751 1685746 cri.go:96] found id: ""
	I1222 01:40:12.517779 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.517787 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:12.517794 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:12.517858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:12.544749 1685746 cri.go:96] found id: ""
	I1222 01:40:12.544777 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.544786 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:12.544793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:12.544858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:12.576758 1685746 cri.go:96] found id: ""
	I1222 01:40:12.576786 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.576795 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:12.576805 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:12.576816 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:12.592450 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:12.592478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:12.658073 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:12.658125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:12.658138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:12.683599 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:12.683637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:12.715675 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:12.715707 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:15.275108 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:15.285651 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:15.285724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:15.311249 1685746 cri.go:96] found id: ""
	I1222 01:40:15.311277 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.311287 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:15.311293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:15.311353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:15.336192 1685746 cri.go:96] found id: ""
	I1222 01:40:15.336218 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.336226 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:15.336234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:15.336297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:15.362231 1685746 cri.go:96] found id: ""
	I1222 01:40:15.362254 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.362263 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:15.362269 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:15.362331 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:15.390149 1685746 cri.go:96] found id: ""
	I1222 01:40:15.390176 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.390185 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:15.390192 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:15.390259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:15.417421 1685746 cri.go:96] found id: ""
	I1222 01:40:15.417446 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.417456 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:15.417464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:15.417530 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:15.444318 1685746 cri.go:96] found id: ""
	I1222 01:40:15.444346 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.444356 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:15.444368 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:15.444428 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:15.469475 1685746 cri.go:96] found id: ""
	I1222 01:40:15.469503 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.469512 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:15.469520 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:15.469581 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:15.501561 1685746 cri.go:96] found id: ""
	I1222 01:40:15.501588 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.501597 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:15.501606 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:15.501637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:15.518032 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:15.518062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:15.588024 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:15.588049 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:15.588062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:15.613914 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:15.613953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:15.645712 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:15.645739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:40:14.747949 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:16.749012 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:18.200926 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:18.211578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:18.211651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:18.237396 1685746 cri.go:96] found id: ""
	I1222 01:40:18.237421 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.237429 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:18.237436 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:18.237503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:18.264313 1685746 cri.go:96] found id: ""
	I1222 01:40:18.264345 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.264356 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:18.264369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:18.264451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:18.290240 1685746 cri.go:96] found id: ""
	I1222 01:40:18.290265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.290274 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:18.290281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:18.290340 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:18.315874 1685746 cri.go:96] found id: ""
	I1222 01:40:18.315898 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.315907 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:18.315914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:18.315975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:18.340813 1685746 cri.go:96] found id: ""
	I1222 01:40:18.340836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.340844 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:18.340852 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:18.340912 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:18.368094 1685746 cri.go:96] found id: ""
	I1222 01:40:18.368119 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.368128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:18.368135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:18.368251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:18.393525 1685746 cri.go:96] found id: ""
	I1222 01:40:18.393551 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.393559 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:18.393566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:18.393629 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:18.419984 1685746 cri.go:96] found id: ""
	I1222 01:40:18.420011 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.420020 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:18.420031 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:18.420043 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:18.435061 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:18.435090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:18.511216 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:18.511242 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:18.511258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:18.539215 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:18.539253 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:18.571721 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:18.571752 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.133335 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:21.144470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:21.144552 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:21.170402 1685746 cri.go:96] found id: ""
	I1222 01:40:21.170435 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.170444 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:21.170451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:21.170514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:21.197647 1685746 cri.go:96] found id: ""
	I1222 01:40:21.197674 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.197683 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:21.197690 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:21.197754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:21.231085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.231120 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.231130 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:21.231137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:21.231243 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:21.268085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.268112 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.268121 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:21.268129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:21.268195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:21.293752 1685746 cri.go:96] found id: ""
	I1222 01:40:21.293781 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.293791 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:21.293797 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:21.293864 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:21.320171 1685746 cri.go:96] found id: ""
	I1222 01:40:21.320195 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.320203 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:21.320210 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:21.320273 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:21.346069 1685746 cri.go:96] found id: ""
	I1222 01:40:21.346162 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.346177 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:21.346185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:21.346246 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:21.371416 1685746 cri.go:96] found id: ""
	I1222 01:40:21.371443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.371452 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:21.371462 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:21.371475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:21.404674 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:21.404703 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.460348 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:21.460388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:21.475958 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:21.475994 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:21.561495 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:21.561520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:21.561533 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:19.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:21.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:24.089244 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:24.100814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:24.100889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:24.126847 1685746 cri.go:96] found id: ""
	I1222 01:40:24.126878 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.126888 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:24.126895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:24.126959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:24.152740 1685746 cri.go:96] found id: ""
	I1222 01:40:24.152768 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.152778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:24.152784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:24.152845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:24.178506 1685746 cri.go:96] found id: ""
	I1222 01:40:24.178532 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.178540 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:24.178547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:24.178628 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:24.210111 1685746 cri.go:96] found id: ""
	I1222 01:40:24.210138 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.210147 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:24.210156 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:24.210219 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:24.234336 1685746 cri.go:96] found id: ""
	I1222 01:40:24.234358 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.234372 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:24.234379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:24.234440 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:24.259792 1685746 cri.go:96] found id: ""
	I1222 01:40:24.259861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.259884 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:24.259898 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:24.259973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:24.285594 1685746 cri.go:96] found id: ""
	I1222 01:40:24.285623 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.285632 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:24.285639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:24.285722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:24.312027 1685746 cri.go:96] found id: ""
	I1222 01:40:24.312055 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.312064 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:24.312074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:24.312088 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:24.345845 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:24.345873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:24.404101 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:24.404140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:24.419436 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:24.419465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:24.485147 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:24.485182 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:24.485195 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:25.275578 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:40:25.338578 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:25.338685 1685746 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:40:23.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:25.748112 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:27.748979 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:27.016338 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:27.030615 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:27.030685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:27.060751 1685746 cri.go:96] found id: ""
	I1222 01:40:27.060775 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.060784 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:27.060791 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:27.060850 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:27.088784 1685746 cri.go:96] found id: ""
	I1222 01:40:27.088807 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.088816 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:27.088822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:27.088889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:27.115559 1685746 cri.go:96] found id: ""
	I1222 01:40:27.115581 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.115590 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:27.115596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:27.115658 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:27.141509 1685746 cri.go:96] found id: ""
	I1222 01:40:27.141579 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.141602 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:27.141624 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:27.141712 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:27.168944 1685746 cri.go:96] found id: ""
	I1222 01:40:27.168984 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.168993 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:27.169006 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:27.169076 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:27.194554 1685746 cri.go:96] found id: ""
	I1222 01:40:27.194584 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.194593 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:27.194599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:27.194662 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:27.219603 1685746 cri.go:96] found id: ""
	I1222 01:40:27.219684 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.219707 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:27.219721 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:27.219801 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:27.246999 1685746 cri.go:96] found id: ""
	I1222 01:40:27.247033 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.247042 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:27.247067 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:27.247087 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:27.302977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:27.303012 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:27.318364 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:27.318398 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:27.385339 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:27.385413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:27.385442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:27.411346 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:27.411384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:29.941731 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:29.955808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:29.955883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:29.982684 1685746 cri.go:96] found id: ""
	I1222 01:40:29.982709 1685746 logs.go:282] 0 containers: []
	W1222 01:40:29.982718 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:29.982725 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:29.982796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:30.036793 1685746 cri.go:96] found id: ""
	I1222 01:40:30.036836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.036847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:30.036858 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:30.036986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:30.127706 1685746 cri.go:96] found id: ""
	I1222 01:40:30.127740 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.127750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:30.127757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:30.127828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:30.158476 1685746 cri.go:96] found id: ""
	I1222 01:40:30.158509 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.158521 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:30.158529 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:30.158598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:30.187425 1685746 cri.go:96] found id: ""
	I1222 01:40:30.187453 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.187463 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:30.187470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:30.187539 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:30.216013 1685746 cri.go:96] found id: ""
	I1222 01:40:30.216043 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.216052 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:30.216060 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:30.216125 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:30.241947 1685746 cri.go:96] found id: ""
	I1222 01:40:30.241975 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.241985 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:30.241991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:30.242074 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:30.271569 1685746 cri.go:96] found id: ""
	I1222 01:40:30.271595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.271603 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:30.271613 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:30.271625 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:30.327858 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:30.327896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:30.343479 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:30.343505 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:30.411657 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:30.411678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:30.411692 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:30.436851 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:30.436886 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:30.511390 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:40:30.582457 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:30.582560 1685746 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:40:30.587532 1685746 out.go:179] * Enabled addons: 
	I1222 01:40:30.590426 1685746 addons.go:530] duration metric: took 1m51.812167431s for enable addons: enabled=[]
	W1222 01:40:30.247997 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:32.248097 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:32.969406 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:32.980360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:32.980444 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:33.016753 1685746 cri.go:96] found id: ""
	I1222 01:40:33.016778 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.016787 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:33.016795 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:33.016881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:33.053288 1685746 cri.go:96] found id: ""
	I1222 01:40:33.053315 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.053334 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:33.053358 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:33.053457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:33.087392 1685746 cri.go:96] found id: ""
	I1222 01:40:33.087417 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.087426 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:33.087432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:33.087492 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:33.113564 1685746 cri.go:96] found id: ""
	I1222 01:40:33.113595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.113604 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:33.113611 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:33.113698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:33.143733 1685746 cri.go:96] found id: ""
	I1222 01:40:33.143757 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.143766 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:33.143772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:33.143835 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:33.169776 1685746 cri.go:96] found id: ""
	I1222 01:40:33.169808 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.169816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:33.169824 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:33.169887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:33.198413 1685746 cri.go:96] found id: ""
	I1222 01:40:33.198438 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.198446 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:33.198453 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:33.198514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:33.223746 1685746 cri.go:96] found id: ""
	I1222 01:40:33.223816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.223838 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:33.223855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:33.223866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:33.249217 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:33.249247 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:33.282243 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:33.282269 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:33.340677 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:33.340714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:33.355635 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:33.355667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:33.438690 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:35.940454 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:35.954241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:35.954312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:35.979549 1685746 cri.go:96] found id: ""
	I1222 01:40:35.979576 1685746 logs.go:282] 0 containers: []
	W1222 01:40:35.979585 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:35.979592 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:35.979654 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:36.010177 1685746 cri.go:96] found id: ""
	I1222 01:40:36.010207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.010217 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:36.010224 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:36.010295 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:36.045048 1685746 cri.go:96] found id: ""
	I1222 01:40:36.045078 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.045088 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:36.045095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:36.045157 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:36.074866 1685746 cri.go:96] found id: ""
	I1222 01:40:36.074889 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.074897 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:36.074903 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:36.074965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:36.101425 1685746 cri.go:96] found id: ""
	I1222 01:40:36.101499 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.101511 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:36.101518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:36.106750 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:36.134167 1685746 cri.go:96] found id: ""
	I1222 01:40:36.134205 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.134215 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:36.134223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:36.134288 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:36.159767 1685746 cri.go:96] found id: ""
	I1222 01:40:36.159792 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.159802 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:36.159809 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:36.159873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:36.188878 1685746 cri.go:96] found id: ""
	I1222 01:40:36.188907 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.188917 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:36.188928 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:36.188941 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:36.253797 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:36.253877 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:36.253906 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:36.279371 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:36.279408 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:36.308866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:36.308901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:36.365568 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:36.365603 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:34.248867 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:36.748755 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:38.881766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:38.892862 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:38.892944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:38.919366 1685746 cri.go:96] found id: ""
	I1222 01:40:38.919399 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.919409 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:38.919421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:38.919495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:38.953015 1685746 cri.go:96] found id: ""
	I1222 01:40:38.953042 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.953051 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:38.953058 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:38.953121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:38.979133 1685746 cri.go:96] found id: ""
	I1222 01:40:38.979158 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.979167 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:38.979173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:38.979236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:39.017688 1685746 cri.go:96] found id: ""
	I1222 01:40:39.017714 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.017724 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:39.017735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:39.017797 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:39.056591 1685746 cri.go:96] found id: ""
	I1222 01:40:39.056614 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.056622 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:39.056629 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:39.056686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:39.085085 1685746 cri.go:96] found id: ""
	I1222 01:40:39.085155 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.085177 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:39.085199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:39.085296 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:39.114614 1685746 cri.go:96] found id: ""
	I1222 01:40:39.114640 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.114649 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:39.114656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:39.114738 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:39.140466 1685746 cri.go:96] found id: ""
	I1222 01:40:39.140511 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.140520 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:39.140545 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:39.140564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:39.208956 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:39.208979 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:39.208992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:39.234396 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:39.234430 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:39.264983 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:39.265011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:39.320138 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:39.320173 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:38.748943 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:41.248791 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:41.835978 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:41.846958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:41.847061 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:41.872281 1685746 cri.go:96] found id: ""
	I1222 01:40:41.872307 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.872318 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:41.872324 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:41.872429 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:41.902068 1685746 cri.go:96] found id: ""
	I1222 01:40:41.902127 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.902137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:41.902163 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:41.902275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:41.936505 1685746 cri.go:96] found id: ""
	I1222 01:40:41.936535 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.936544 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:41.936550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:41.936615 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:41.961446 1685746 cri.go:96] found id: ""
	I1222 01:40:41.961480 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.961489 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:41.961496 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:41.961569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:41.989500 1685746 cri.go:96] found id: ""
	I1222 01:40:41.989582 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.989606 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:41.989631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:41.989730 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:42.028918 1685746 cri.go:96] found id: ""
	I1222 01:40:42.028947 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.028956 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:42.028963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:42.029037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:42.065570 1685746 cri.go:96] found id: ""
	I1222 01:40:42.065618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.065633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:42.065641 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:42.065724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:42.095634 1685746 cri.go:96] found id: ""
	I1222 01:40:42.095661 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.095671 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:42.095681 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:42.095702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:42.158126 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:42.158170 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:42.175600 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:42.175640 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:42.256856 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:42.256882 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:42.256896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:42.283618 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:42.283665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:44.813189 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:44.824766 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:44.824836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:44.853167 1685746 cri.go:96] found id: ""
	I1222 01:40:44.853192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.853201 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:44.853208 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:44.853269 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:44.878679 1685746 cri.go:96] found id: ""
	I1222 01:40:44.878711 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.878721 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:44.878728 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:44.878792 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:44.905070 1685746 cri.go:96] found id: ""
	I1222 01:40:44.905097 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.905106 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:44.905113 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:44.905177 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:44.930494 1685746 cri.go:96] found id: ""
	I1222 01:40:44.930523 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.930533 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:44.930539 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:44.930599 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:44.960159 1685746 cri.go:96] found id: ""
	I1222 01:40:44.960187 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.960196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:44.960203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:44.960308 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:44.985038 1685746 cri.go:96] found id: ""
	I1222 01:40:44.985066 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.985076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:44.985083 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:44.985147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:45.046474 1685746 cri.go:96] found id: ""
	I1222 01:40:45.046501 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.046511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:45.046518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:45.046590 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:45.111231 1685746 cri.go:96] found id: ""
	I1222 01:40:45.111266 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.111275 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:45.111286 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:45.111299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:45.180293 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:45.180418 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:45.231743 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:45.231786 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:45.318004 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:45.318031 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:45.318045 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:45.351434 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:45.351474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:43.748820 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:45.748974 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:47.885492 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:47.896303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:47.896380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:47.927221 1685746 cri.go:96] found id: ""
	I1222 01:40:47.927247 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.927257 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:47.927264 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:47.927326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:47.955055 1685746 cri.go:96] found id: ""
	I1222 01:40:47.955082 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.955091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:47.955098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:47.955167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:47.982730 1685746 cri.go:96] found id: ""
	I1222 01:40:47.982760 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.982770 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:47.982777 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:47.982841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:48.013060 1685746 cri.go:96] found id: ""
	I1222 01:40:48.013093 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.013104 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:48.013111 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:48.013184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:48.044824 1685746 cri.go:96] found id: ""
	I1222 01:40:48.044902 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.044918 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:48.044926 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:48.044994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:48.077777 1685746 cri.go:96] found id: ""
	I1222 01:40:48.077806 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.077816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:48.077822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:48.077887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:48.108631 1685746 cri.go:96] found id: ""
	I1222 01:40:48.108659 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.108669 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:48.108676 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:48.108767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:48.135002 1685746 cri.go:96] found id: ""
	I1222 01:40:48.135035 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.135045 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:48.135056 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:48.135092 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:48.192262 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:48.192299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:48.207972 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:48.208074 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:48.295537 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:48.295563 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:48.295583 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:48.322629 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:48.322665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:50.857236 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:50.868315 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:50.868396 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:50.894289 1685746 cri.go:96] found id: ""
	I1222 01:40:50.894337 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.894346 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:50.894353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:50.894414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:50.920265 1685746 cri.go:96] found id: ""
	I1222 01:40:50.920288 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.920297 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:50.920303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:50.920362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:50.946413 1685746 cri.go:96] found id: ""
	I1222 01:40:50.946437 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.946445 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:50.946452 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:50.946511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:50.973167 1685746 cri.go:96] found id: ""
	I1222 01:40:50.973192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.973202 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:50.973209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:50.973278 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:50.998695 1685746 cri.go:96] found id: ""
	I1222 01:40:50.998730 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.998739 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:50.998746 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:50.998812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:51.027679 1685746 cri.go:96] found id: ""
	I1222 01:40:51.027748 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.027770 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:51.027792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:51.027882 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:51.057709 1685746 cri.go:96] found id: ""
	I1222 01:40:51.057791 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.057816 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:51.057839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:51.057933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:51.085239 1685746 cri.go:96] found id: ""
	I1222 01:40:51.085311 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.085335 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:51.085361 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:51.085402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:51.143088 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:51.143131 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:51.159838 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:51.159866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:51.229894 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:51.229917 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:51.229932 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:51.258211 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:51.258321 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:48.248802 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:50.748310 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:53.799763 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:53.811321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:53.811400 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:53.838808 1685746 cri.go:96] found id: ""
	I1222 01:40:53.838834 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.838844 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:53.838851 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:53.838918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:53.865906 1685746 cri.go:96] found id: ""
	I1222 01:40:53.865930 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.865938 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:53.865945 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:53.866008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:53.891986 1685746 cri.go:96] found id: ""
	I1222 01:40:53.892030 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.892040 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:53.892047 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:53.892120 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:53.918633 1685746 cri.go:96] found id: ""
	I1222 01:40:53.918660 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.918670 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:53.918677 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:53.918748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:53.945224 1685746 cri.go:96] found id: ""
	I1222 01:40:53.945259 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.945268 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:53.945274 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:53.945345 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:53.976181 1685746 cri.go:96] found id: ""
	I1222 01:40:53.976207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.976216 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:53.976223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:53.976286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:54.017529 1685746 cri.go:96] found id: ""
	I1222 01:40:54.017609 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.017633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:54.017657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:54.017766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:54.050157 1685746 cri.go:96] found id: ""
	I1222 01:40:54.050234 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.050257 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:54.050284 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:54.050322 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:54.107873 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:54.107911 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:54.123115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:54.123192 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:54.189938 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:54.189963 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:54.189976 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:54.216904 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:54.216959 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:53.248434 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:55.748007 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:57.748191 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:56.757953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:56.769647 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:56.769793 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:56.802913 1685746 cri.go:96] found id: ""
	I1222 01:40:56.802941 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.802951 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:56.802958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:56.803018 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:56.828625 1685746 cri.go:96] found id: ""
	I1222 01:40:56.828654 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.828664 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:56.828671 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:56.828734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:56.853350 1685746 cri.go:96] found id: ""
	I1222 01:40:56.853378 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.853388 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:56.853394 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:56.853456 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:56.883418 1685746 cri.go:96] found id: ""
	I1222 01:40:56.883443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.883458 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:56.883466 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:56.883532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:56.912769 1685746 cri.go:96] found id: ""
	I1222 01:40:56.912799 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.912809 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:56.912817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:56.912880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:56.938494 1685746 cri.go:96] found id: ""
	I1222 01:40:56.938519 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.938529 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:56.938536 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:56.938602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:56.968944 1685746 cri.go:96] found id: ""
	I1222 01:40:56.968978 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.968987 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:56.968994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:56.969063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:56.995238 1685746 cri.go:96] found id: ""
	I1222 01:40:56.995265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.995274 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:56.995284 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:56.995295 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:57.022601 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:57.022641 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:57.055915 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:57.055993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:57.110958 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:57.110993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:57.126557 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:57.126587 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:57.199192 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:59.699460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:59.709928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:59.709999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:59.734831 1685746 cri.go:96] found id: ""
	I1222 01:40:59.734861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.734870 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:59.734876 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:59.734939 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:59.766737 1685746 cri.go:96] found id: ""
	I1222 01:40:59.766765 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.766773 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:59.766785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:59.766845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:59.800714 1685746 cri.go:96] found id: ""
	I1222 01:40:59.800742 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.800751 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:59.800757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:59.800817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:59.828842 1685746 cri.go:96] found id: ""
	I1222 01:40:59.828871 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.828880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:59.828888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:59.828951 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:59.854824 1685746 cri.go:96] found id: ""
	I1222 01:40:59.854848 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.854857 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:59.854864 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:59.854928 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:59.879691 1685746 cri.go:96] found id: ""
	I1222 01:40:59.879761 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.879784 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:59.879798 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:59.879874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:59.905099 1685746 cri.go:96] found id: ""
	I1222 01:40:59.905136 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.905146 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:59.905152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:59.905232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:59.929727 1685746 cri.go:96] found id: ""
	I1222 01:40:59.929763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.929775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:59.929784 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:59.929794 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:59.985430 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:59.985466 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:00.001212 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:00.001238 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:00.267041 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:00.267072 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:00.267085 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:00.299707 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:00.299756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:00.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:02.248653 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:02.866175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:02.877065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:02.877139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:02.902030 1685746 cri.go:96] found id: ""
	I1222 01:41:02.902137 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.902161 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:02.902183 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:02.902277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:02.928023 1685746 cri.go:96] found id: ""
	I1222 01:41:02.928048 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.928058 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:02.928065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:02.928128 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:02.958559 1685746 cri.go:96] found id: ""
	I1222 01:41:02.958595 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.958605 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:02.958612 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:02.958675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:02.984249 1685746 cri.go:96] found id: ""
	I1222 01:41:02.984272 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.984281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:02.984287 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:02.984355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:03.033125 1685746 cri.go:96] found id: ""
	I1222 01:41:03.033152 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.033161 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:03.033167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:03.033228 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:03.058557 1685746 cri.go:96] found id: ""
	I1222 01:41:03.058583 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.058591 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:03.058598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:03.058657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:03.089068 1685746 cri.go:96] found id: ""
	I1222 01:41:03.089112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.089122 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:03.089132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:03.089210 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:03.119177 1685746 cri.go:96] found id: ""
	I1222 01:41:03.119201 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.119210 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:03.119220 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:03.119231 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:03.182970 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:03.183000 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:03.183013 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:03.207694 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:03.207726 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:03.238481 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:03.238559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:03.311496 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:03.311531 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:05.829656 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:05.840301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:05.840394 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:05.867057 1685746 cri.go:96] found id: ""
	I1222 01:41:05.867080 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.867089 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:05.867095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:05.867155 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:05.897184 1685746 cri.go:96] found id: ""
	I1222 01:41:05.897206 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.897215 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:05.897221 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:05.897284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:05.922902 1685746 cri.go:96] found id: ""
	I1222 01:41:05.922924 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.922933 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:05.922940 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:05.923001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:05.947567 1685746 cri.go:96] found id: ""
	I1222 01:41:05.947591 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.947600 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:05.947606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:05.947725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:05.973767 1685746 cri.go:96] found id: ""
	I1222 01:41:05.973795 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.973803 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:05.973810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:05.973870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:05.999045 1685746 cri.go:96] found id: ""
	I1222 01:41:05.999075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.999084 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:05.999090 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:05.999156 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:06.037292 1685746 cri.go:96] found id: ""
	I1222 01:41:06.037323 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.037331 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:06.037338 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:06.037403 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:06.063105 1685746 cri.go:96] found id: ""
	I1222 01:41:06.063136 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.063145 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:06.063155 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:06.063166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:06.118645 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:06.118682 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:06.134249 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:06.134283 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:06.202948 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:06.202967 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:06.202978 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:06.227736 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:06.227770 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:04.248851 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:06.748841 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:08.763766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:08.776166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:08.776292 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:08.802744 1685746 cri.go:96] found id: ""
	I1222 01:41:08.802770 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.802780 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:08.802787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:08.802897 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:08.829155 1685746 cri.go:96] found id: ""
	I1222 01:41:08.829196 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.829205 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:08.829212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:08.829286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:08.853323 1685746 cri.go:96] found id: ""
	I1222 01:41:08.853358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.853368 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:08.853374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:08.853442 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:08.878843 1685746 cri.go:96] found id: ""
	I1222 01:41:08.878871 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.878880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:08.878887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:08.878948 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:08.907348 1685746 cri.go:96] found id: ""
	I1222 01:41:08.907374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.907383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:08.907390 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:08.907459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:08.935980 1685746 cri.go:96] found id: ""
	I1222 01:41:08.936006 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.936015 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:08.936022 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:08.936103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:08.965110 1685746 cri.go:96] found id: ""
	I1222 01:41:08.965149 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.965159 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:08.965165 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:08.965240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:08.991481 1685746 cri.go:96] found id: ""
	I1222 01:41:08.991509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.991518 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:08.991527 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:08.991539 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:09.007297 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:09.007330 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:09.077476 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:09.077557 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:09.077597 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:09.102923 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:09.102958 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:09.131422 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:09.131450 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:09.248676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:11.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:11.686744 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:11.697606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:11.697689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:11.722593 1685746 cri.go:96] found id: ""
	I1222 01:41:11.722664 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.722686 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:11.722701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:11.722796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:11.767413 1685746 cri.go:96] found id: ""
	I1222 01:41:11.767439 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.767448 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:11.767454 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:11.767526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:11.800344 1685746 cri.go:96] found id: ""
	I1222 01:41:11.800433 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.800466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:11.800487 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:11.800594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:11.836608 1685746 cri.go:96] found id: ""
	I1222 01:41:11.836693 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.836717 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:11.836755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:11.836854 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:11.862781 1685746 cri.go:96] found id: ""
	I1222 01:41:11.862808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.862818 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:11.862830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:11.862894 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:11.891376 1685746 cri.go:96] found id: ""
	I1222 01:41:11.891401 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.891410 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:11.891416 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:11.891480 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:11.920553 1685746 cri.go:96] found id: ""
	I1222 01:41:11.920581 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.920590 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:11.920596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:11.920657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:11.948610 1685746 cri.go:96] found id: ""
	I1222 01:41:11.948634 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.948642 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:11.948651 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:11.948662 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:12.006298 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:12.006340 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:12.022860 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:12.022889 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:12.087185 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:12.087252 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:12.087282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:12.112381 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:12.112415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:14.645175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:14.655581 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:14.655655 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:14.683086 1685746 cri.go:96] found id: ""
	I1222 01:41:14.683110 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.683118 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:14.683125 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:14.683192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:14.708684 1685746 cri.go:96] found id: ""
	I1222 01:41:14.708707 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.708716 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:14.708723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:14.708783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:14.733550 1685746 cri.go:96] found id: ""
	I1222 01:41:14.733572 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.733580 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:14.733586 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:14.733653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:14.762029 1685746 cri.go:96] found id: ""
	I1222 01:41:14.762052 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.762061 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:14.762068 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:14.762191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:14.802569 1685746 cri.go:96] found id: ""
	I1222 01:41:14.802593 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.802602 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:14.802609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:14.802668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:14.829402 1685746 cri.go:96] found id: ""
	I1222 01:41:14.829425 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.829434 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:14.829440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:14.829499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:14.854254 1685746 cri.go:96] found id: ""
	I1222 01:41:14.854276 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.854285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:14.854291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:14.854350 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:14.879183 1685746 cri.go:96] found id: ""
	I1222 01:41:14.879205 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.879213 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:14.879222 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:14.879239 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:14.933758 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:14.933795 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:14.948809 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:14.948834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:15.022478 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:15.022594 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:15.022610 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:15.071291 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:15.071336 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:14.248149 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:16.748036 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:17.608065 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:17.618810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:17.618881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:17.643606 1685746 cri.go:96] found id: ""
	I1222 01:41:17.643633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.643643 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:17.643650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:17.643760 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:17.669609 1685746 cri.go:96] found id: ""
	I1222 01:41:17.669639 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.669649 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:17.669656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:17.669725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:17.694910 1685746 cri.go:96] found id: ""
	I1222 01:41:17.694934 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.694943 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:17.694950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:17.695009 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:17.721067 1685746 cri.go:96] found id: ""
	I1222 01:41:17.721101 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.721111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:17.721118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:17.721251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:17.762594 1685746 cri.go:96] found id: ""
	I1222 01:41:17.762669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.762691 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:17.762715 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:17.762802 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:17.806835 1685746 cri.go:96] found id: ""
	I1222 01:41:17.806870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.806880 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:17.806887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:17.806964 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:17.837236 1685746 cri.go:96] found id: ""
	I1222 01:41:17.837273 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.837284 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:17.837291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:17.837362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:17.867730 1685746 cri.go:96] found id: ""
	I1222 01:41:17.867802 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.867825 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:17.867840 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:17.867852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:17.927517 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:17.927555 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:17.943454 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:17.943484 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:18.012436 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:18.012522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:18.012553 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:18.040219 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:18.040262 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:20.572279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:20.583193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:20.583266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:20.609051 1685746 cri.go:96] found id: ""
	I1222 01:41:20.609075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.609083 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:20.609089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:20.609150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:20.635365 1685746 cri.go:96] found id: ""
	I1222 01:41:20.635391 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.635400 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:20.635406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:20.635470 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:20.664505 1685746 cri.go:96] found id: ""
	I1222 01:41:20.664532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.664541 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:20.664547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:20.664609 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:20.690863 1685746 cri.go:96] found id: ""
	I1222 01:41:20.690887 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.690904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:20.690916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:20.690981 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:20.716167 1685746 cri.go:96] found id: ""
	I1222 01:41:20.716188 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.716196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:20.716203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:20.716262 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:20.758512 1685746 cri.go:96] found id: ""
	I1222 01:41:20.758538 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.758547 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:20.758554 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:20.758612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:20.789839 1685746 cri.go:96] found id: ""
	I1222 01:41:20.789866 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.789875 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:20.789882 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:20.789944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:20.823216 1685746 cri.go:96] found id: ""
	I1222 01:41:20.823244 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.823254 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:20.823263 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:20.823275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:20.878834 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:20.878873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:20.894375 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:20.894409 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:20.963456 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:20.963479 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:20.963518 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:20.992875 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:20.992916 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:18.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:21.248234 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:23.526237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:23.540126 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:23.540244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:23.567806 1685746 cri.go:96] found id: ""
	I1222 01:41:23.567833 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.567842 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:23.567849 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:23.567915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:23.594496 1685746 cri.go:96] found id: ""
	I1222 01:41:23.594525 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.594538 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:23.594546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:23.594614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:23.621007 1685746 cri.go:96] found id: ""
	I1222 01:41:23.621034 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.621043 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:23.621050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:23.621111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:23.646829 1685746 cri.go:96] found id: ""
	I1222 01:41:23.646857 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.646867 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:23.646874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:23.646941 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:23.672993 1685746 cri.go:96] found id: ""
	I1222 01:41:23.673020 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.673030 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:23.673036 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:23.673099 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:23.704873 1685746 cri.go:96] found id: ""
	I1222 01:41:23.704901 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.704910 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:23.704916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:23.704980 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:23.731220 1685746 cri.go:96] found id: ""
	I1222 01:41:23.731248 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.731259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:23.731265 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:23.731330 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:23.769641 1685746 cri.go:96] found id: ""
	I1222 01:41:23.769669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.769678 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:23.769687 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:23.769701 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:23.811900 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:23.811928 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:23.870851 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:23.870887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:23.886411 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:23.886488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:23.954566 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:23.954588 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:23.954602 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.483766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:26.495024 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:26.495100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:26.521679 1685746 cri.go:96] found id: ""
	I1222 01:41:26.521706 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.521716 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:26.521723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:26.521786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:26.552746 1685746 cri.go:96] found id: ""
	I1222 01:41:26.552773 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.552782 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:26.552789 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:26.552856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:26.580045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.580072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.580082 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:26.580088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:26.580151 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:26.606656 1685746 cri.go:96] found id: ""
	I1222 01:41:26.606683 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.606693 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:26.606700 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:26.606759 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:26.632499 1685746 cri.go:96] found id: ""
	I1222 01:41:26.632539 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.632548 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:26.632556 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:26.632640 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:26.664045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.664072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.664082 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:26.664089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:26.664172 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	W1222 01:41:23.248384 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:25.748529 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:27.748967 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:26.689648 1685746 cri.go:96] found id: ""
	I1222 01:41:26.689672 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.689693 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:26.689704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:26.689772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:26.715926 1685746 cri.go:96] found id: ""
	I1222 01:41:26.715949 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.715958 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:26.715966 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:26.715977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:26.779696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:26.779785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:26.802335 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:26.802412 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:26.866575 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:26.866599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:26.866613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.893136 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:26.893176 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:29.425895 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:29.438488 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:29.438569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:29.467384 1685746 cri.go:96] found id: ""
	I1222 01:41:29.467415 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.467426 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:29.467432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:29.467497 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:29.502253 1685746 cri.go:96] found id: ""
	I1222 01:41:29.502277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.502285 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:29.502291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:29.502351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:29.538703 1685746 cri.go:96] found id: ""
	I1222 01:41:29.538730 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.538739 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:29.538747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:29.538809 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:29.567395 1685746 cri.go:96] found id: ""
	I1222 01:41:29.567422 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.567431 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:29.567439 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:29.567500 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:29.595415 1685746 cri.go:96] found id: ""
	I1222 01:41:29.595493 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.595508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:29.595516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:29.595583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:29.622583 1685746 cri.go:96] found id: ""
	I1222 01:41:29.622611 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.622620 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:29.622627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:29.622693 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:29.649130 1685746 cri.go:96] found id: ""
	I1222 01:41:29.649156 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.649166 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:29.649173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:29.649240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:29.676205 1685746 cri.go:96] found id: ""
	I1222 01:41:29.676231 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.676240 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:29.676250 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:29.676279 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:29.731980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:29.732016 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:29.747474 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:29.747503 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:29.833319 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:29.833342 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:29.833355 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:29.859398 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:29.859432 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:30.247999 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:32.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:32.387755 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:32.398548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:32.398639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:32.422848 1685746 cri.go:96] found id: ""
	I1222 01:41:32.422870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.422879 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:32.422885 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:32.422976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:32.448126 1685746 cri.go:96] found id: ""
	I1222 01:41:32.448153 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.448162 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:32.448171 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:32.448233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:32.476732 1685746 cri.go:96] found id: ""
	I1222 01:41:32.476769 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.476779 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:32.476785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:32.476856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:32.521856 1685746 cri.go:96] found id: ""
	I1222 01:41:32.521885 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.521915 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:32.521923 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:32.522010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:32.559083 1685746 cri.go:96] found id: ""
	I1222 01:41:32.559112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.559121 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:32.559128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:32.559199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:32.585037 1685746 cri.go:96] found id: ""
	I1222 01:41:32.585066 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.585076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:32.585082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:32.585142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:32.611094 1685746 cri.go:96] found id: ""
	I1222 01:41:32.611117 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.611126 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:32.611132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:32.611200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:32.636572 1685746 cri.go:96] found id: ""
	I1222 01:41:32.636598 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.636606 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:32.636614 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:32.636626 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:32.691721 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:32.691756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:32.706757 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:32.706791 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:32.784203 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:32.784277 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:32.784302 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:32.812067 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:32.812099 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:35.344181 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:35.354549 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:35.354621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:35.378138 1685746 cri.go:96] found id: ""
	I1222 01:41:35.378160 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.378169 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:35.378177 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:35.378236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:35.403725 1685746 cri.go:96] found id: ""
	I1222 01:41:35.403748 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.403757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:35.403764 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:35.403825 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:35.429025 1685746 cri.go:96] found id: ""
	I1222 01:41:35.429050 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.429059 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:35.429066 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:35.429129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:35.459607 1685746 cri.go:96] found id: ""
	I1222 01:41:35.459633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.459642 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:35.459649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:35.459707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:35.483992 1685746 cri.go:96] found id: ""
	I1222 01:41:35.484015 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.484024 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:35.484031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:35.484094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:35.517254 1685746 cri.go:96] found id: ""
	I1222 01:41:35.517277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.517286 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:35.517293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:35.517353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:35.546137 1685746 cri.go:96] found id: ""
	I1222 01:41:35.546219 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.546242 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:35.546284 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:35.546378 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:35.576307 1685746 cri.go:96] found id: ""
	I1222 01:41:35.576329 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.576338 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:35.576347 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:35.576358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:35.631853 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:35.631887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:35.646787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:35.646827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:35.713895 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:35.713927 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:35.713943 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:35.739168 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:35.739250 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:34.248875 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:36.748177 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:38.278358 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:38.289460 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:38.289534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:38.316292 1685746 cri.go:96] found id: ""
	I1222 01:41:38.316320 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.316329 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:38.316336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:38.316416 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:38.344932 1685746 cri.go:96] found id: ""
	I1222 01:41:38.344960 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.344969 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:38.344976 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:38.345038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:38.371484 1685746 cri.go:96] found id: ""
	I1222 01:41:38.371509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.371519 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:38.371525 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:38.371594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:38.401114 1685746 cri.go:96] found id: ""
	I1222 01:41:38.401140 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.401149 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:38.401157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:38.401217 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:38.427857 1685746 cri.go:96] found id: ""
	I1222 01:41:38.427881 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.427890 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:38.427897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:38.427962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:38.453333 1685746 cri.go:96] found id: ""
	I1222 01:41:38.453358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.453367 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:38.453374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:38.453455 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:38.477527 1685746 cri.go:96] found id: ""
	I1222 01:41:38.477610 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.477633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:38.477655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:38.477748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:38.523741 1685746 cri.go:96] found id: ""
	I1222 01:41:38.523763 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.523772 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:38.523787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:38.523798 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:38.595469 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:38.595491 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:38.595508 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:38.621769 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:38.621808 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:38.651477 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:38.651507 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:38.710896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:38.710934 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.227040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:41.237881 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:41.237954 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:41.265636 1685746 cri.go:96] found id: ""
	I1222 01:41:41.265671 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.265680 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:41.265687 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:41.265757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:41.291304 1685746 cri.go:96] found id: ""
	I1222 01:41:41.291330 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.291339 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:41.291346 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:41.291414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:41.316968 1685746 cri.go:96] found id: ""
	I1222 01:41:41.317003 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.317013 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:41.317020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:41.317094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:41.342750 1685746 cri.go:96] found id: ""
	I1222 01:41:41.342779 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.342794 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:41.342801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:41.342865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:41.368173 1685746 cri.go:96] found id: ""
	I1222 01:41:41.368197 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.368205 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:41.368212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:41.368275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:41.396263 1685746 cri.go:96] found id: ""
	I1222 01:41:41.396290 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.396300 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:41.396308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:41.396380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:41.424002 1685746 cri.go:96] found id: ""
	I1222 01:41:41.424028 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.424037 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:41.424044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:41.424104 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:41.450858 1685746 cri.go:96] found id: ""
	I1222 01:41:41.450886 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.450894 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:41.450904 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:41.450915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:41.510703 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:41.510785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.529398 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:41.529475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:41.596968 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:41.596989 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:41.597002 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:41.623436 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:41.623472 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:39.248106 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:41.748067 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:44.153585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:44.164792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:44.164865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:44.190259 1685746 cri.go:96] found id: ""
	I1222 01:41:44.190282 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.190290 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:44.190297 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:44.190357 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:44.223886 1685746 cri.go:96] found id: ""
	I1222 01:41:44.223911 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.223922 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:44.223929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:44.223988 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:44.249898 1685746 cri.go:96] found id: ""
	I1222 01:41:44.249922 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.249931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:44.249948 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:44.250010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:44.275190 1685746 cri.go:96] found id: ""
	I1222 01:41:44.275217 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.275227 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:44.275233 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:44.275325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:44.301198 1685746 cri.go:96] found id: ""
	I1222 01:41:44.301221 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.301230 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:44.301237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:44.301311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:44.325952 1685746 cri.go:96] found id: ""
	I1222 01:41:44.325990 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.326000 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:44.326023 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:44.326154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:44.352189 1685746 cri.go:96] found id: ""
	I1222 01:41:44.352227 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.352236 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:44.352259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:44.352334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:44.377820 1685746 cri.go:96] found id: ""
	I1222 01:41:44.377848 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.377858 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:44.377868 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:44.377879 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:44.393230 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:44.393258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:44.463151 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:44.463175 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:44.463188 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:44.488611 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:44.488690 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:44.523935 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:44.524011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:44.248599 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:46.748094 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:47.091277 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:47.102299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:47.102374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:47.128309 1685746 cri.go:96] found id: ""
	I1222 01:41:47.128334 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.128344 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:47.128351 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:47.128431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:47.154429 1685746 cri.go:96] found id: ""
	I1222 01:41:47.154456 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.154465 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:47.154473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:47.154535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:47.179829 1685746 cri.go:96] found id: ""
	I1222 01:41:47.179856 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.179865 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:47.179872 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:47.179933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:47.204965 1685746 cri.go:96] found id: ""
	I1222 01:41:47.204999 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.205009 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:47.205016 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:47.205088 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:47.231912 1685746 cri.go:96] found id: ""
	I1222 01:41:47.231939 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.231949 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:47.231955 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:47.232043 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:47.262187 1685746 cri.go:96] found id: ""
	I1222 01:41:47.262215 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.262230 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:47.262237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:47.262301 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:47.287536 1685746 cri.go:96] found id: ""
	I1222 01:41:47.287567 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.287577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:47.287583 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:47.287648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:47.313516 1685746 cri.go:96] found id: ""
	I1222 01:41:47.313544 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.313553 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:47.313563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:47.313573 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:47.369295 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:47.369329 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:47.387169 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:47.387197 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:47.455311 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:47.455335 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:47.455347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:47.481041 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:47.481078 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:50.030868 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:50.043616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:50.043692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:50.072180 1685746 cri.go:96] found id: ""
	I1222 01:41:50.072210 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.072220 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:50.072229 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:50.072297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:50.100979 1685746 cri.go:96] found id: ""
	I1222 01:41:50.101005 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.101014 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:50.101021 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:50.101091 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:50.128360 1685746 cri.go:96] found id: ""
	I1222 01:41:50.128392 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.128404 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:50.128411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:50.128476 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:50.154912 1685746 cri.go:96] found id: ""
	I1222 01:41:50.154945 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.154955 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:50.154963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:50.155033 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:50.181433 1685746 cri.go:96] found id: ""
	I1222 01:41:50.181465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.181474 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:50.181483 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:50.181553 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:50.207260 1685746 cri.go:96] found id: ""
	I1222 01:41:50.207289 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.207299 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:50.207305 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:50.207366 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:50.234601 1685746 cri.go:96] found id: ""
	I1222 01:41:50.234649 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.234659 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:50.234666 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:50.234744 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:50.264579 1685746 cri.go:96] found id: ""
	I1222 01:41:50.264621 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.264631 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:50.264641 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:50.264661 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:50.321078 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:50.321112 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:50.336044 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:50.336069 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:50.401373 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:50.401396 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:50.401410 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:50.428108 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:50.428151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:48.749155 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:51.248977 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:52.958393 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:52.969793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:52.969867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:53.021307 1685746 cri.go:96] found id: ""
	I1222 01:41:53.021331 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.021340 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:53.021352 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:53.021415 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:53.053765 1685746 cri.go:96] found id: ""
	I1222 01:41:53.053789 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.053798 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:53.053804 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:53.053872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:53.079107 1685746 cri.go:96] found id: ""
	I1222 01:41:53.079135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.079144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:53.079152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:53.079214 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:53.106101 1685746 cri.go:96] found id: ""
	I1222 01:41:53.106130 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.106138 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:53.106145 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:53.106209 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:53.135616 1685746 cri.go:96] found id: ""
	I1222 01:41:53.135643 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.135652 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:53.135659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:53.135766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:53.160318 1685746 cri.go:96] found id: ""
	I1222 01:41:53.160344 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.160353 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:53.160360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:53.160451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:53.185257 1685746 cri.go:96] found id: ""
	I1222 01:41:53.185297 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.185306 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:53.185313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:53.185401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:53.210753 1685746 cri.go:96] found id: ""
	I1222 01:41:53.210824 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.210839 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:53.210855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:53.210867 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:53.237290 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:53.237323 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:53.267342 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:53.267374 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:53.323394 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:53.323429 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:53.339435 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:53.339465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:53.403286 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:55.903619 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:55.914760 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:55.914836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:55.939507 1685746 cri.go:96] found id: ""
	I1222 01:41:55.939532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.939541 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:55.939548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:55.939614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:55.965607 1685746 cri.go:96] found id: ""
	I1222 01:41:55.965633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.965643 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:55.965649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:55.965715 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:56.006138 1685746 cri.go:96] found id: ""
	I1222 01:41:56.006171 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.006181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:56.006188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:56.006256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:56.040087 1685746 cri.go:96] found id: ""
	I1222 01:41:56.040116 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.040125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:56.040131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:56.040191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:56.068695 1685746 cri.go:96] found id: ""
	I1222 01:41:56.068719 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.068727 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:56.068734 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:56.068795 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:56.096726 1685746 cri.go:96] found id: ""
	I1222 01:41:56.096808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.096832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:56.096854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:56.096963 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:56.125548 1685746 cri.go:96] found id: ""
	I1222 01:41:56.125627 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.125652 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:56.125675 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:56.125763 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:56.150956 1685746 cri.go:96] found id: ""
	I1222 01:41:56.150986 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.150995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:56.151005 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:56.151049 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:56.216560 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:56.216581 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:56.216594 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:56.242334 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:56.242368 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:56.270763 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:56.270793 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:56.325996 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:56.326038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:41:53.748987 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:56.248859 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:58.841618 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:58.852321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:58.852411 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:58.877439 1685746 cri.go:96] found id: ""
	I1222 01:41:58.877465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.877475 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:58.877482 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:58.877542 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:58.902343 1685746 cri.go:96] found id: ""
	I1222 01:41:58.902369 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.902378 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:58.902385 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:58.902443 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:58.927733 1685746 cri.go:96] found id: ""
	I1222 01:41:58.927758 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.927767 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:58.927774 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:58.927834 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:58.954349 1685746 cri.go:96] found id: ""
	I1222 01:41:58.954374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.954384 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:58.954391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:58.954464 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:58.984449 1685746 cri.go:96] found id: ""
	I1222 01:41:58.984519 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.984533 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:58.984541 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:58.984612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:59.020245 1685746 cri.go:96] found id: ""
	I1222 01:41:59.020277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.020294 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:59.020303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:59.020387 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:59.059067 1685746 cri.go:96] found id: ""
	I1222 01:41:59.059135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.059157 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:59.059170 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:59.059244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:59.090327 1685746 cri.go:96] found id: ""
	I1222 01:41:59.090355 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.090364 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:59.090372 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:59.090384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:59.149768 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:59.149809 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:59.164825 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:59.164857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:59.232698 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:59.232720 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:59.232734 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:59.258805 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:59.258840 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:58.748026 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:00.748292 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:01.787611 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:01.799088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:01.799206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:01.829442 1685746 cri.go:96] found id: ""
	I1222 01:42:01.829521 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.829543 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:01.829566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:01.829657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:01.856095 1685746 cri.go:96] found id: ""
	I1222 01:42:01.856122 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.856132 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:01.856139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:01.856203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:01.882443 1685746 cri.go:96] found id: ""
	I1222 01:42:01.882469 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.882478 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:01.882485 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:01.882549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:01.908008 1685746 cri.go:96] found id: ""
	I1222 01:42:01.908033 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.908043 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:01.908049 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:01.908111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:01.934350 1685746 cri.go:96] found id: ""
	I1222 01:42:01.934377 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.934386 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:01.934393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:01.934457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:01.960407 1685746 cri.go:96] found id: ""
	I1222 01:42:01.960433 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.960442 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:01.960449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:01.960512 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:01.988879 1685746 cri.go:96] found id: ""
	I1222 01:42:01.988915 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.988925 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:01.988931 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:01.989000 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:02.021404 1685746 cri.go:96] found id: ""
	I1222 01:42:02.021444 1685746 logs.go:282] 0 containers: []
	W1222 01:42:02.021454 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:02.021464 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:02.021476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:02.053252 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:02.053282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:02.111509 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:02.111548 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:02.127002 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:02.127081 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:02.196408 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:02.196429 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:02.196442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:04.723107 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:04.734699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:04.734786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:04.771439 1685746 cri.go:96] found id: ""
	I1222 01:42:04.771462 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.771471 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:04.771477 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:04.771540 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:04.806612 1685746 cri.go:96] found id: ""
	I1222 01:42:04.806639 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.806648 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:04.806655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:04.806714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:04.832290 1685746 cri.go:96] found id: ""
	I1222 01:42:04.832320 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.832329 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:04.832336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:04.832404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:04.860422 1685746 cri.go:96] found id: ""
	I1222 01:42:04.860460 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.860469 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:04.860494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:04.860603 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:04.885397 1685746 cri.go:96] found id: ""
	I1222 01:42:04.885424 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.885433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:04.885440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:04.885524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:04.910499 1685746 cri.go:96] found id: ""
	I1222 01:42:04.910529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.910539 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:04.910546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:04.910607 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:04.934849 1685746 cri.go:96] found id: ""
	I1222 01:42:04.934887 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.934897 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:04.934921 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:04.935013 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:04.964384 1685746 cri.go:96] found id: ""
	I1222 01:42:04.964411 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.964420 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:04.964429 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:04.964460 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:05.023249 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:05.023347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:05.042677 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:05.042702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:05.113125 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:05.113151 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:05.113167 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:05.139072 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:05.139109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:03.248327 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:05.748676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:07.672253 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:07.683433 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:07.683523 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:07.710000 1685746 cri.go:96] found id: ""
	I1222 01:42:07.710025 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.710033 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:07.710040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:07.710129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:07.749657 1685746 cri.go:96] found id: ""
	I1222 01:42:07.749685 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.749695 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:07.749702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:07.749769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:07.779817 1685746 cri.go:96] found id: ""
	I1222 01:42:07.779844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.779853 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:07.779860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:07.779920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:07.809501 1685746 cri.go:96] found id: ""
	I1222 01:42:07.809529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.809538 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:07.809546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:07.809606 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:07.834291 1685746 cri.go:96] found id: ""
	I1222 01:42:07.834318 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.834327 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:07.834334 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:07.834395 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:07.859724 1685746 cri.go:96] found id: ""
	I1222 01:42:07.859791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.859807 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:07.859814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:07.859874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:07.891259 1685746 cri.go:96] found id: ""
	I1222 01:42:07.891287 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.891296 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:07.891303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:07.891362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:07.916371 1685746 cri.go:96] found id: ""
	I1222 01:42:07.916451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.916467 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:07.916477 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:07.916489 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:07.943955 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:07.943981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:08.000957 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:08.001003 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:08.021265 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:08.021299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:08.098699 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:08.098725 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:08.098739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:10.625986 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:10.637185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:10.637275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:10.663011 1685746 cri.go:96] found id: ""
	I1222 01:42:10.663039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.663048 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:10.663055 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:10.663121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:10.689593 1685746 cri.go:96] found id: ""
	I1222 01:42:10.689623 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.689633 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:10.689639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:10.689704 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:10.718520 1685746 cri.go:96] found id: ""
	I1222 01:42:10.718545 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.718554 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:10.718561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:10.718627 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:10.748796 1685746 cri.go:96] found id: ""
	I1222 01:42:10.748829 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.748839 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:10.748846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:10.748919 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:10.780456 1685746 cri.go:96] found id: ""
	I1222 01:42:10.780493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.780508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:10.780515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:10.780591 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:10.810196 1685746 cri.go:96] found id: ""
	I1222 01:42:10.810234 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.810243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:10.810250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:10.810346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:10.836475 1685746 cri.go:96] found id: ""
	I1222 01:42:10.836502 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.836511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:10.836518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:10.836582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:10.862222 1685746 cri.go:96] found id: ""
	I1222 01:42:10.862246 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.862255 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:10.862264 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:10.862275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:10.918613 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:10.918648 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:10.933449 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:10.933478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:11.013628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:11.013706 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:11.013738 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:11.042713 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:11.042803 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:08.248287 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:10.748100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:12.748911 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:13.581897 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:13.592897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:13.592969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:13.621158 1685746 cri.go:96] found id: ""
	I1222 01:42:13.621184 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.621194 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:13.621200 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:13.621265 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:13.646742 1685746 cri.go:96] found id: ""
	I1222 01:42:13.646769 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.646778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:13.646784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:13.646843 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:13.671981 1685746 cri.go:96] found id: ""
	I1222 01:42:13.672014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.672023 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:13.672030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:13.672093 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:13.697359 1685746 cri.go:96] found id: ""
	I1222 01:42:13.697387 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.697397 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:13.697408 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:13.697471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:13.723455 1685746 cri.go:96] found id: ""
	I1222 01:42:13.723481 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.723491 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:13.723499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:13.723560 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:13.762227 1685746 cri.go:96] found id: ""
	I1222 01:42:13.762251 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.762259 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:13.762266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:13.762325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:13.792416 1685746 cri.go:96] found id: ""
	I1222 01:42:13.792440 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.792448 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:13.792455 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:13.792521 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:13.824151 1685746 cri.go:96] found id: ""
	I1222 01:42:13.824178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.824188 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:13.824227 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:13.824251 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:13.839610 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:13.839639 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:13.903103 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:13.903125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:13.903138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:13.928958 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:13.928992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:13.959685 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:13.959714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.518219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:16.529223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:16.529294 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:16.555927 1685746 cri.go:96] found id: ""
	I1222 01:42:16.555953 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.555962 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:16.555969 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:16.556028 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:16.581196 1685746 cri.go:96] found id: ""
	I1222 01:42:16.581223 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.581233 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:16.581240 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:16.581303 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:16.607543 1685746 cri.go:96] found id: ""
	I1222 01:42:16.607569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.607578 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:16.607585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:16.607651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:16.637077 1685746 cri.go:96] found id: ""
	I1222 01:42:16.637106 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.637116 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:16.637123 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:16.637183 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:16.662155 1685746 cri.go:96] found id: ""
	I1222 01:42:16.662178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.662187 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:16.662193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:16.662257 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	W1222 01:42:14.749008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:17.249086 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:16.694483 1685746 cri.go:96] found id: ""
	I1222 01:42:16.694507 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.694516 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:16.694523 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:16.694582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:16.719153 1685746 cri.go:96] found id: ""
	I1222 01:42:16.719178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.719188 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:16.719195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:16.719258 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:16.750982 1685746 cri.go:96] found id: ""
	I1222 01:42:16.751007 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.751017 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:16.751026 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:16.751038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.809848 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:16.809888 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:16.828821 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:16.828852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:16.896032 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:16.896058 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:16.896071 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:16.921650 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:16.921686 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.450391 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:19.461241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:19.461314 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:19.488679 1685746 cri.go:96] found id: ""
	I1222 01:42:19.488705 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.488715 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:19.488722 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:19.488784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:19.514947 1685746 cri.go:96] found id: ""
	I1222 01:42:19.514972 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.514982 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:19.514989 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:19.515051 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:19.541761 1685746 cri.go:96] found id: ""
	I1222 01:42:19.541786 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.541795 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:19.541802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:19.541867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:19.566418 1685746 cri.go:96] found id: ""
	I1222 01:42:19.566441 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.566450 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:19.566456 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:19.566515 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:19.591707 1685746 cri.go:96] found id: ""
	I1222 01:42:19.591739 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.591748 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:19.591754 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:19.591857 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:19.618308 1685746 cri.go:96] found id: ""
	I1222 01:42:19.618343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.618352 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:19.618362 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:19.618441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:19.644750 1685746 cri.go:96] found id: ""
	I1222 01:42:19.644791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.644801 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:19.644808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:19.644883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:19.674267 1685746 cri.go:96] found id: ""
	I1222 01:42:19.674295 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.674304 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:19.674315 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:19.674327 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:19.689360 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:19.689445 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:19.766188 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:19.766263 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:19.766290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:19.793580 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:19.793657 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.829853 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:19.829884 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:42:19.748284 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:22.248100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:22.388471 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:22.399089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:22.399192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:22.428498 1685746 cri.go:96] found id: ""
	I1222 01:42:22.428569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.428583 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:22.428591 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:22.428672 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:22.458145 1685746 cri.go:96] found id: ""
	I1222 01:42:22.458182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.458196 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:22.458203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:22.458276 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:22.485165 1685746 cri.go:96] found id: ""
	I1222 01:42:22.485202 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.485212 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:22.485218 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:22.485283 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:22.510263 1685746 cri.go:96] found id: ""
	I1222 01:42:22.510292 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.510302 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:22.510308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:22.510374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:22.539347 1685746 cri.go:96] found id: ""
	I1222 01:42:22.539374 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.539383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:22.539391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:22.539453 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:22.564154 1685746 cri.go:96] found id: ""
	I1222 01:42:22.564182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.564193 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:22.564205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:22.564311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:22.593661 1685746 cri.go:96] found id: ""
	I1222 01:42:22.593688 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.593697 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:22.593703 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:22.593767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:22.618629 1685746 cri.go:96] found id: ""
	I1222 01:42:22.618654 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.618663 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:22.618672 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:22.618714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:22.675019 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:22.675057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:22.690208 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:22.690241 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:22.759102 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:22.759127 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:22.759140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:22.790419 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:22.790453 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:25.330239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:25.341121 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:25.341190 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:25.370417 1685746 cri.go:96] found id: ""
	I1222 01:42:25.370493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.370523 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:25.370543 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:25.370636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:25.399975 1685746 cri.go:96] found id: ""
	I1222 01:42:25.400000 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.400009 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:25.400015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:25.400075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:25.424384 1685746 cri.go:96] found id: ""
	I1222 01:42:25.424414 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.424424 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:25.424431 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:25.424491 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:25.453828 1685746 cri.go:96] found id: ""
	I1222 01:42:25.453916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.453956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:25.453984 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:25.454124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:25.480847 1685746 cri.go:96] found id: ""
	I1222 01:42:25.480868 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.480877 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:25.480883 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:25.480942 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:25.508776 1685746 cri.go:96] found id: ""
	I1222 01:42:25.508801 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.508810 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:25.508817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:25.508877 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:25.539362 1685746 cri.go:96] found id: ""
	I1222 01:42:25.539385 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.539396 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:25.539402 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:25.539461 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:25.566615 1685746 cri.go:96] found id: ""
	I1222 01:42:25.566641 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.566650 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:25.566659 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:25.566670 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:25.622750 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:25.622784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:25.638693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:25.638728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:25.702796 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:25.702823 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:25.702835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:25.727901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:25.727938 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:24.248221 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:26.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:29.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:31.247763 1681323 node_ready.go:38] duration metric: took 6m0.000217195s for node "no-preload-154186" to be "Ready" ...
	I1222 01:42:31.251066 1681323 out.go:203] 
	W1222 01:42:31.253946 1681323 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:42:31.253969 1681323 out.go:285] * 
	W1222 01:42:31.256107 1681323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:42:31.259342 1681323 out.go:203] 
	I1222 01:42:28.269113 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:28.280220 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:28.280317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:28.305926 1685746 cri.go:96] found id: ""
	I1222 01:42:28.305948 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.305957 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:28.305963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:28.306020 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:28.330985 1685746 cri.go:96] found id: ""
	I1222 01:42:28.331010 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.331020 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:28.331026 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:28.331086 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:28.357992 1685746 cri.go:96] found id: ""
	I1222 01:42:28.358018 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.358028 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:28.358035 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:28.358131 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:28.384559 1685746 cri.go:96] found id: ""
	I1222 01:42:28.384585 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.384594 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:28.384603 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:28.384665 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:28.412628 1685746 cri.go:96] found id: ""
	I1222 01:42:28.412650 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.412659 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:28.412665 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:28.412731 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:28.438582 1685746 cri.go:96] found id: ""
	I1222 01:42:28.438605 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.438613 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:28.438620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:28.438685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:28.468458 1685746 cri.go:96] found id: ""
	I1222 01:42:28.468484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.468493 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:28.468500 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:28.468565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:28.493207 1685746 cri.go:96] found id: ""
	I1222 01:42:28.493231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.493239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:28.493249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:28.493260 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:28.547741 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:28.547777 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:28.562578 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:28.562608 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:28.637227 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:42:28.637250 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:28.637263 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:28.662593 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:28.662632 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.190941 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:31.202783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:31.202858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:31.227601 1685746 cri.go:96] found id: ""
	I1222 01:42:31.227625 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.227633 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:31.227642 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:31.227718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:31.267011 1685746 cri.go:96] found id: ""
	I1222 01:42:31.267040 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.267049 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:31.267056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:31.267118 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:31.363207 1685746 cri.go:96] found id: ""
	I1222 01:42:31.363231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.363239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:31.363246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:31.363320 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:31.412753 1685746 cri.go:96] found id: ""
	I1222 01:42:31.412780 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.412788 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:31.412796 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:31.412858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:31.453115 1685746 cri.go:96] found id: ""
	I1222 01:42:31.453145 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.453154 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:31.453167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:31.453225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:31.492529 1685746 cri.go:96] found id: ""
	I1222 01:42:31.492550 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.492558 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:31.492565 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:31.492621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:31.529156 1685746 cri.go:96] found id: ""
	I1222 01:42:31.529179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.529187 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:31.529193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:31.529252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:31.561255 1685746 cri.go:96] found id: ""
	I1222 01:42:31.561283 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.561292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:31.561301 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:31.561314 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.622500 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:31.622526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:31.690749 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:31.690784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:31.706062 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:31.706182 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:31.827329 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:31.792352    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804228    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804969    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820004    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820674    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:42:31.827354 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:31.827369 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.368888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:34.380077 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:34.380154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:34.406174 1685746 cri.go:96] found id: ""
	I1222 01:42:34.406198 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.406207 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:34.406213 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:34.406280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:34.437127 1685746 cri.go:96] found id: ""
	I1222 01:42:34.437152 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.437161 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:34.437168 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:34.437234 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:34.462419 1685746 cri.go:96] found id: ""
	I1222 01:42:34.462445 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.462454 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:34.462463 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:34.462524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:34.491011 1685746 cri.go:96] found id: ""
	I1222 01:42:34.491039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.491049 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:34.491056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:34.491117 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:34.515544 1685746 cri.go:96] found id: ""
	I1222 01:42:34.515570 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.515580 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:34.515587 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:34.515644 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:34.543686 1685746 cri.go:96] found id: ""
	I1222 01:42:34.543714 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.543722 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:34.543730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:34.543788 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:34.572402 1685746 cri.go:96] found id: ""
	I1222 01:42:34.572427 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.572436 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:34.572442 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:34.572561 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:34.597762 1685746 cri.go:96] found id: ""
	I1222 01:42:34.597789 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.597799 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:34.597808 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:34.597820 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.622955 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:34.622991 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:34.651563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:34.651592 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:34.708102 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:34.708139 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:34.723329 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:34.723358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:34.788870 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:34.780572    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.781230    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.782852    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.783472    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.785097    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:42:37.289033 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:37.307914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:37.308010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:37.342876 1685746 cri.go:96] found id: ""
	I1222 01:42:37.342916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.342925 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:37.342932 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:37.342994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:37.369883 1685746 cri.go:96] found id: ""
	I1222 01:42:37.369912 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.369921 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:37.369928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:37.369990 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:37.399765 1685746 cri.go:96] found id: ""
	I1222 01:42:37.399792 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.399800 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:37.399807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:37.399887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:37.425866 1685746 cri.go:96] found id: ""
	I1222 01:42:37.425894 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.425904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:37.425911 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:37.425976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:37.452177 1685746 cri.go:96] found id: ""
	I1222 01:42:37.452252 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.452273 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:37.452280 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:37.452349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:37.478374 1685746 cri.go:96] found id: ""
	I1222 01:42:37.478405 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.478415 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:37.478421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:37.478482 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:37.504627 1685746 cri.go:96] found id: ""
	I1222 01:42:37.504663 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.504672 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:37.504679 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:37.504785 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:37.531304 1685746 cri.go:96] found id: ""
	I1222 01:42:37.531343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.531353 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:37.531380 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:37.531399 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:37.559371 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:37.559401 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:37.614026 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:37.614064 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:37.630657 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:37.630689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:37.698972 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:37.690733    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.691573    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.693410    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.694215    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.695368    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:42:37.698998 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:37.699010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.226630 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:40.251806 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:40.251880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:40.312461 1685746 cri.go:96] found id: ""
	I1222 01:42:40.312484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.312493 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:40.312499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:40.312559 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:40.346654 1685746 cri.go:96] found id: ""
	I1222 01:42:40.346682 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.346691 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:40.346697 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:40.346757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:40.376245 1685746 cri.go:96] found id: ""
	I1222 01:42:40.376279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.376288 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:40.376294 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:40.376355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:40.400546 1685746 cri.go:96] found id: ""
	I1222 01:42:40.400572 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.400581 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:40.400588 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:40.400647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:40.425326 1685746 cri.go:96] found id: ""
	I1222 01:42:40.425353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.425362 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:40.425369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:40.425431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:40.449304 1685746 cri.go:96] found id: ""
	I1222 01:42:40.449328 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.449337 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:40.449345 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:40.449405 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:40.474828 1685746 cri.go:96] found id: ""
	I1222 01:42:40.474854 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.474863 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:40.474870 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:40.474931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:40.503909 1685746 cri.go:96] found id: ""
	I1222 01:42:40.503933 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.503941 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:40.503950 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:40.503960 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:40.559784 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:40.559821 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:40.575010 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:40.575041 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:40.643863 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:40.643888 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:40.643900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.674641 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:40.674683 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:43.208931 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:43.219892 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:43.219965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:43.278356 1685746 cri.go:96] found id: ""
	I1222 01:42:43.278383 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.278393 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:43.278399 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:43.278468 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:43.318802 1685746 cri.go:96] found id: ""
	I1222 01:42:43.318828 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.318838 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:43.318844 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:43.318903 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:43.351222 1685746 cri.go:96] found id: ""
	I1222 01:42:43.351247 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.351256 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:43.351263 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:43.351323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:43.377242 1685746 cri.go:96] found id: ""
	I1222 01:42:43.377267 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.377275 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:43.377282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:43.377346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:43.403326 1685746 cri.go:96] found id: ""
	I1222 01:42:43.403353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.403363 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:43.403370 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:43.403459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:43.429205 1685746 cri.go:96] found id: ""
	I1222 01:42:43.429232 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.429241 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:43.429248 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:43.429351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:43.455157 1685746 cri.go:96] found id: ""
	I1222 01:42:43.455188 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.455198 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:43.455204 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:43.455274 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:43.484817 1685746 cri.go:96] found id: ""
	I1222 01:42:43.484846 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.484856 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:43.484866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:43.484877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:43.544248 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:43.544285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:43.559152 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:43.559184 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:43.623520 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:43.623546 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:43.623559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:43.648911 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:43.648951 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:46.182386 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:46.193692 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:46.193766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:46.219554 1685746 cri.go:96] found id: ""
	I1222 01:42:46.219592 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.219602 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:46.219608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:46.219667 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:46.269097 1685746 cri.go:96] found id: ""
	I1222 01:42:46.269128 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.269137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:46.269152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:46.269215 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:46.315573 1685746 cri.go:96] found id: ""
	I1222 01:42:46.315609 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.315619 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:46.315627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:46.315698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:46.354254 1685746 cri.go:96] found id: ""
	I1222 01:42:46.354291 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.354300 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:46.354311 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:46.354385 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:46.382733 1685746 cri.go:96] found id: ""
	I1222 01:42:46.382810 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.382823 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:46.382831 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:46.382893 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:46.409988 1685746 cri.go:96] found id: ""
	I1222 01:42:46.410014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.410024 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:46.410032 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:46.410123 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:46.440621 1685746 cri.go:96] found id: ""
	I1222 01:42:46.440645 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.440654 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:46.440661 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:46.440726 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:46.466426 1685746 cri.go:96] found id: ""
	I1222 01:42:46.466451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.466461 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:46.466478 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:46.466491 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:46.522404 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:46.522449 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:46.538001 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:46.538129 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:46.608273 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:46.608296 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:46.608311 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:46.634354 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:46.634388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.167965 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:49.178919 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:49.178992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:49.204884 1685746 cri.go:96] found id: ""
	I1222 01:42:49.204909 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.204917 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:49.204924 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:49.204992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:49.231503 1685746 cri.go:96] found id: ""
	I1222 01:42:49.231530 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.231539 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:49.231547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:49.231611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:49.274476 1685746 cri.go:96] found id: ""
	I1222 01:42:49.274500 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.274508 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:49.274515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:49.274577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:49.318032 1685746 cri.go:96] found id: ""
	I1222 01:42:49.318054 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.318063 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:49.318069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:49.318163 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:49.361375 1685746 cri.go:96] found id: ""
	I1222 01:42:49.361398 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.361407 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:49.361414 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:49.361475 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:49.389203 1685746 cri.go:96] found id: ""
	I1222 01:42:49.389230 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.389240 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:49.389247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:49.389315 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:49.419554 1685746 cri.go:96] found id: ""
	I1222 01:42:49.419579 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.419588 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:49.419595 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:49.419656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:49.448457 1685746 cri.go:96] found id: ""
	I1222 01:42:49.448482 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.448491 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:49.448501 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:49.448513 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.477586 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:49.477616 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:49.534782 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:49.534822 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:49.550136 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:49.550166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:49.618143 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:49.618169 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:49.618190 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.144370 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:52.155874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:52.155999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:52.183608 1685746 cri.go:96] found id: ""
	I1222 01:42:52.183633 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.183641 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:52.183648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:52.183710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:52.213975 1685746 cri.go:96] found id: ""
	I1222 01:42:52.214002 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.214011 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:52.214018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:52.214108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:52.260878 1685746 cri.go:96] found id: ""
	I1222 01:42:52.260904 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.260913 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:52.260920 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:52.260986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:52.326163 1685746 cri.go:96] found id: ""
	I1222 01:42:52.326191 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.326200 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:52.326206 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:52.326268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:52.351586 1685746 cri.go:96] found id: ""
	I1222 01:42:52.351610 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.351619 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:52.351625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:52.351685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:52.378191 1685746 cri.go:96] found id: ""
	I1222 01:42:52.378271 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.378297 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:52.378320 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:52.378423 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:52.403988 1685746 cri.go:96] found id: ""
	I1222 01:42:52.404014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.404024 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:52.404030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:52.404115 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:52.434842 1685746 cri.go:96] found id: ""
	I1222 01:42:52.434870 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.434879 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:52.434888 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:52.434901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:52.493615 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:52.493659 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:52.509970 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:52.510008 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:52.573713 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:52.573748 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:52.573760 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.598497 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:52.598532 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.130037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:55.141017 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:55.141094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:55.166253 1685746 cri.go:96] found id: ""
	I1222 01:42:55.166279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.166289 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:55.166298 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:55.166358 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:55.190818 1685746 cri.go:96] found id: ""
	I1222 01:42:55.190844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.190856 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:55.190863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:55.190969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:55.216347 1685746 cri.go:96] found id: ""
	I1222 01:42:55.216380 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.216390 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:55.216397 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:55.216501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:55.259015 1685746 cri.go:96] found id: ""
	I1222 01:42:55.259091 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.259115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:55.259135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:55.259247 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:55.326026 1685746 cri.go:96] found id: ""
	I1222 01:42:55.326049 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.326058 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:55.326065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:55.326147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:55.350799 1685746 cri.go:96] found id: ""
	I1222 01:42:55.350823 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.350832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:55.350839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:55.350899 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:55.376097 1685746 cri.go:96] found id: ""
	I1222 01:42:55.376123 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.376133 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:55.376139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:55.376200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:55.401620 1685746 cri.go:96] found id: ""
	I1222 01:42:55.401693 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.401715 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:55.401740 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:55.401783 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.434315 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:55.434343 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:55.489616 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:55.489652 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:55.504798 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:55.504829 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:55.569246 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:55.569273 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:55.569285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.094905 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:58.105827 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:58.105902 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:58.131496 1685746 cri.go:96] found id: ""
	I1222 01:42:58.131522 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.131531 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:58.131538 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:58.131602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:58.156152 1685746 cri.go:96] found id: ""
	I1222 01:42:58.156179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.156188 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:58.156195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:58.156253 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:58.182075 1685746 cri.go:96] found id: ""
	I1222 01:42:58.182124 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.182140 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:58.182147 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:58.182211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:58.212714 1685746 cri.go:96] found id: ""
	I1222 01:42:58.212737 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.212746 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:58.212752 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:58.212811 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:58.256896 1685746 cri.go:96] found id: ""
	I1222 01:42:58.256919 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.256931 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:58.256938 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:58.257002 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:58.314212 1685746 cri.go:96] found id: ""
	I1222 01:42:58.314235 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.314243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:58.314250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:58.314311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:58.348822 1685746 cri.go:96] found id: ""
	I1222 01:42:58.348844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.348853 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:58.348860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:58.349006 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:58.375112 1685746 cri.go:96] found id: ""
	I1222 01:42:58.375139 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.375148 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:58.375157 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:58.375199 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:58.440769 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:58.440793 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:58.440807 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.466180 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:58.466214 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:58.498249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:58.498277 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:58.553912 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:58.553948 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.069587 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:01.080494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:01.080569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:01.106366 1685746 cri.go:96] found id: ""
	I1222 01:43:01.106393 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.106403 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:01.106409 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:01.106472 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:01.134991 1685746 cri.go:96] found id: ""
	I1222 01:43:01.135019 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.135028 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:01.135040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:01.135108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:01.161160 1685746 cri.go:96] found id: ""
	I1222 01:43:01.161188 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.161198 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:01.161205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:01.161268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:01.189244 1685746 cri.go:96] found id: ""
	I1222 01:43:01.189271 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.189281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:01.189288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:01.189353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:01.216039 1685746 cri.go:96] found id: ""
	I1222 01:43:01.216109 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.216123 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:01.216131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:01.216206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:01.255772 1685746 cri.go:96] found id: ""
	I1222 01:43:01.255803 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.255812 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:01.255818 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:01.255880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:01.331745 1685746 cri.go:96] found id: ""
	I1222 01:43:01.331771 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.331780 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:01.331787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:01.331856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:01.360958 1685746 cri.go:96] found id: ""
	I1222 01:43:01.360985 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.360995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:01.361003 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:01.361014 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:01.416443 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:01.416479 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.433706 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:01.433735 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:01.504365 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:01.504393 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:01.504405 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:01.530386 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:01.530421 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.060702 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:04.074701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:04.074781 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:04.104007 1685746 cri.go:96] found id: ""
	I1222 01:43:04.104034 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.104043 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:04.104050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:04.104110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:04.129051 1685746 cri.go:96] found id: ""
	I1222 01:43:04.129081 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.129091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:04.129098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:04.129160 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:04.155234 1685746 cri.go:96] found id: ""
	I1222 01:43:04.155260 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.155275 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:04.155282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:04.155344 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:04.180095 1685746 cri.go:96] found id: ""
	I1222 01:43:04.180120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.180130 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:04.180137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:04.180199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:04.204953 1685746 cri.go:96] found id: ""
	I1222 01:43:04.204976 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.204984 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:04.204991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:04.205052 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:04.231351 1685746 cri.go:96] found id: ""
	I1222 01:43:04.231376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.231385 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:04.231392 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:04.231452 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:04.269450 1685746 cri.go:96] found id: ""
	I1222 01:43:04.269476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.269485 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:04.269492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:04.269556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:04.310137 1685746 cri.go:96] found id: ""
	I1222 01:43:04.310210 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.310247 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:04.310276 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:04.310304 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:04.330066 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:04.330204 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:04.398531 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:04.398600 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:04.398622 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:04.423684 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:04.423715 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.455847 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:04.455915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.011267 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:07.022247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:07.022373 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:07.047710 1685746 cri.go:96] found id: ""
	I1222 01:43:07.047737 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.047746 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:07.047755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:07.047817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:07.071622 1685746 cri.go:96] found id: ""
	I1222 01:43:07.071644 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.071653 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:07.071662 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:07.071724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:07.100514 1685746 cri.go:96] found id: ""
	I1222 01:43:07.100539 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.100548 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:07.100555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:07.100622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:07.126740 1685746 cri.go:96] found id: ""
	I1222 01:43:07.126810 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.126833 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:07.126845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:07.126921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:07.156147 1685746 cri.go:96] found id: ""
	I1222 01:43:07.156174 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.156184 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:07.156190 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:07.156268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:07.185551 1685746 cri.go:96] found id: ""
	I1222 01:43:07.185574 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.185583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:07.185589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:07.185670 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:07.210495 1685746 cri.go:96] found id: ""
	I1222 01:43:07.210563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.210585 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:07.210608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:07.210679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:07.234671 1685746 cri.go:96] found id: ""
	I1222 01:43:07.234751 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.234775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:07.234799 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:07.234847 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.318902 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:07.318936 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:07.334947 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:07.334977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:07.400498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:07.400520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:07.400534 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:07.425576 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:07.425613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:09.957230 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:09.968065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:09.968142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:09.993760 1685746 cri.go:96] found id: ""
	I1222 01:43:09.993785 1685746 logs.go:282] 0 containers: []
	W1222 01:43:09.993794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:09.993802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:09.993870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:10.024110 1685746 cri.go:96] found id: ""
	I1222 01:43:10.024140 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.024151 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:10.024157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:10.024232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:10.053092 1685746 cri.go:96] found id: ""
	I1222 01:43:10.053122 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.053132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:10.053138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:10.053203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:10.078967 1685746 cri.go:96] found id: ""
	I1222 01:43:10.078994 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.079004 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:10.079011 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:10.079079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:10.105969 1685746 cri.go:96] found id: ""
	I1222 01:43:10.105993 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.106001 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:10.106008 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:10.106164 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:10.132413 1685746 cri.go:96] found id: ""
	I1222 01:43:10.132448 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.132457 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:10.132464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:10.132526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:10.158912 1685746 cri.go:96] found id: ""
	I1222 01:43:10.158941 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.158950 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:10.158957 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:10.159038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:10.185594 1685746 cri.go:96] found id: ""
	I1222 01:43:10.185621 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.185630 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:10.185639 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:10.185681 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:10.214349 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:10.214378 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:10.274002 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:10.274096 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:10.289686 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:10.289761 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:10.375337 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:10.375413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:10.375441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:12.901196 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:12.911625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:12.911710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:12.936713 1685746 cri.go:96] found id: ""
	I1222 01:43:12.936738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.936747 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:12.936753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:12.936827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:12.961849 1685746 cri.go:96] found id: ""
	I1222 01:43:12.961870 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.961879 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:12.961888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:12.961950 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:12.990893 1685746 cri.go:96] found id: ""
	I1222 01:43:12.990919 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.990929 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:12.990935 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:12.990996 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:13.033584 1685746 cri.go:96] found id: ""
	I1222 01:43:13.033611 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.033621 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:13.033628 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:13.033691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:13.062192 1685746 cri.go:96] found id: ""
	I1222 01:43:13.062216 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.062225 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:13.062232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:13.062297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:13.088173 1685746 cri.go:96] found id: ""
	I1222 01:43:13.088213 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.088223 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:13.088230 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:13.088312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:13.115014 1685746 cri.go:96] found id: ""
	I1222 01:43:13.115051 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.115062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:13.115069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:13.115147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:13.140656 1685746 cri.go:96] found id: ""
	I1222 01:43:13.140691 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.140700 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:13.140710 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:13.140722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:13.177585 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:13.177660 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:13.233128 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:13.233162 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:13.251827 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:13.251907 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:13.360494 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:13.360570 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:13.360589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:15.887876 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:15.898631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:15.898708 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:15.923707 1685746 cri.go:96] found id: ""
	I1222 01:43:15.923732 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.923743 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:15.923750 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:15.923829 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:15.950453 1685746 cri.go:96] found id: ""
	I1222 01:43:15.950478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.950492 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:15.950498 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:15.950612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:15.975355 1685746 cri.go:96] found id: ""
	I1222 01:43:15.975436 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.975460 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:15.975475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:15.975549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:16.000992 1685746 cri.go:96] found id: ""
	I1222 01:43:16.001026 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.001036 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:16.001043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:16.001134 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:16.033538 1685746 cri.go:96] found id: ""
	I1222 01:43:16.033563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.033572 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:16.033578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:16.033641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:16.059451 1685746 cri.go:96] found id: ""
	I1222 01:43:16.059476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.059486 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:16.059492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:16.059556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:16.085491 1685746 cri.go:96] found id: ""
	I1222 01:43:16.085515 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.085524 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:16.085530 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:16.085598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:16.111197 1685746 cri.go:96] found id: ""
	I1222 01:43:16.111220 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.111228 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:16.111237 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:16.111249 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:16.167058 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:16.167095 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:16.182867 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:16.182947 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:16.303679 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:16.303753 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:16.303780 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:16.336416 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:16.336497 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:18.869703 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:18.880527 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:18.880602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:18.906051 1685746 cri.go:96] found id: ""
	I1222 01:43:18.906102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.906112 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:18.906119 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:18.906181 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:18.931999 1685746 cri.go:96] found id: ""
	I1222 01:43:18.932027 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.932036 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:18.932043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:18.932110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:18.959202 1685746 cri.go:96] found id: ""
	I1222 01:43:18.959230 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.959239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:18.959246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:18.959307 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:18.988050 1685746 cri.go:96] found id: ""
	I1222 01:43:18.988075 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.988084 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:18.988091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:18.988179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:19.014062 1685746 cri.go:96] found id: ""
	I1222 01:43:19.014116 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.014125 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:19.014132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:19.014197 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:19.041419 1685746 cri.go:96] found id: ""
	I1222 01:43:19.041454 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.041464 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:19.041471 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:19.041548 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:19.067079 1685746 cri.go:96] found id: ""
	I1222 01:43:19.067114 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.067123 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:19.067130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:19.067199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:19.093005 1685746 cri.go:96] found id: ""
	I1222 01:43:19.093041 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.093050 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:19.093059 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:19.093070 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:19.148083 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:19.148119 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:19.163510 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:19.163547 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:19.228482 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:19.228505 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:19.228519 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:19.264345 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:19.264402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:21.823213 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:21.834353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:21.834427 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:21.860777 1685746 cri.go:96] found id: ""
	I1222 01:43:21.860805 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.860815 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:21.860823 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:21.860889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:21.889075 1685746 cri.go:96] found id: ""
	I1222 01:43:21.889150 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.889173 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:21.889195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:21.889284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:21.915306 1685746 cri.go:96] found id: ""
	I1222 01:43:21.915334 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.915343 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:21.915349 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:21.915413 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:21.940239 1685746 cri.go:96] found id: ""
	I1222 01:43:21.940610 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.940624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:21.940633 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:21.940694 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:21.966280 1685746 cri.go:96] found id: ""
	I1222 01:43:21.966307 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.966316 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:21.966323 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:21.966392 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:21.991888 1685746 cri.go:96] found id: ""
	I1222 01:43:21.991916 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.991925 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:21.991934 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:21.991993 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:22.021851 1685746 cri.go:96] found id: ""
	I1222 01:43:22.021878 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.021888 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:22.021895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:22.021962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:22.052435 1685746 cri.go:96] found id: ""
	I1222 01:43:22.052464 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.052473 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:22.052483 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:22.052495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:22.128628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:22.128653 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:22.128668 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:22.154140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:22.154180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:22.190762 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:22.190790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:22.254223 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:22.254264 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:24.790679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:24.801308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:24.801380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:24.826466 1685746 cri.go:96] found id: ""
	I1222 01:43:24.826492 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.826501 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:24.826508 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:24.826573 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:24.852169 1685746 cri.go:96] found id: ""
	I1222 01:43:24.852196 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.852206 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:24.852212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:24.852277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:24.876880 1685746 cri.go:96] found id: ""
	I1222 01:43:24.876906 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.876915 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:24.876922 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:24.876986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:24.902741 1685746 cri.go:96] found id: ""
	I1222 01:43:24.902769 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.902778 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:24.902785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:24.902851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:24.928580 1685746 cri.go:96] found id: ""
	I1222 01:43:24.928603 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.928612 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:24.928618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:24.928686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:24.958505 1685746 cri.go:96] found id: ""
	I1222 01:43:24.958533 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.958542 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:24.958548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:24.958610 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:24.988354 1685746 cri.go:96] found id: ""
	I1222 01:43:24.988394 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.988403 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:24.988410 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:24.988471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:25.022402 1685746 cri.go:96] found id: ""
	I1222 01:43:25.022445 1685746 logs.go:282] 0 containers: []
	W1222 01:43:25.022455 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:25.022465 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:25.022477 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:25.090031 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:25.090122 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:25.090152 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:25.117050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:25.117090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:25.146413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:25.146443 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:25.203377 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:25.203415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.718901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:27.729888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:27.729962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:27.753619 1685746 cri.go:96] found id: ""
	I1222 01:43:27.753643 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.753651 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:27.753657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:27.753734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:27.778439 1685746 cri.go:96] found id: ""
	I1222 01:43:27.778468 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.778477 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:27.778484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:27.778549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:27.803747 1685746 cri.go:96] found id: ""
	I1222 01:43:27.803776 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.803786 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:27.803792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:27.803851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:27.833272 1685746 cri.go:96] found id: ""
	I1222 01:43:27.833295 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.833303 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:27.833310 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:27.833383 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:27.858574 1685746 cri.go:96] found id: ""
	I1222 01:43:27.858602 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.858613 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:27.858619 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:27.858680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:27.884333 1685746 cri.go:96] found id: ""
	I1222 01:43:27.884361 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.884418 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:27.884434 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:27.884509 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:27.914000 1685746 cri.go:96] found id: ""
	I1222 01:43:27.914111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.914145 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:27.914159 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:27.914221 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:27.939204 1685746 cri.go:96] found id: ""
	I1222 01:43:27.939228 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.939237 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:27.939246 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:27.939257 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.953702 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:27.953728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:28.021111 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:28.021131 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:28.021144 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:28.048052 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:28.048090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:28.080739 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:28.080776 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.641402 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:30.652837 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:30.652908 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:30.679700 1685746 cri.go:96] found id: ""
	I1222 01:43:30.679727 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.679736 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:30.679743 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:30.679872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:30.708517 1685746 cri.go:96] found id: ""
	I1222 01:43:30.708545 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.708554 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:30.708561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:30.708622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:30.737801 1685746 cri.go:96] found id: ""
	I1222 01:43:30.737829 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.737838 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:30.737845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:30.737916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:30.764096 1685746 cri.go:96] found id: ""
	I1222 01:43:30.764124 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.764134 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:30.764141 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:30.764252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:30.789565 1685746 cri.go:96] found id: ""
	I1222 01:43:30.789591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.789599 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:30.789607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:30.789684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:30.822764 1685746 cri.go:96] found id: ""
	I1222 01:43:30.822833 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.822857 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:30.822871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:30.822957 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:30.848727 1685746 cri.go:96] found id: ""
	I1222 01:43:30.848754 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.848763 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:30.848770 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:30.848830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:30.876920 1685746 cri.go:96] found id: ""
	I1222 01:43:30.876945 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.876954 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:30.876963 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:30.876974 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.932977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:30.933015 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:30.950177 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:30.950205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:31.021720 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:31.021745 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:31.021757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:31.047873 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:31.047908 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.582285 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:33.593589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:33.593677 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:33.619720 1685746 cri.go:96] found id: ""
	I1222 01:43:33.619746 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.619755 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:33.619762 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:33.619823 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:33.644535 1685746 cri.go:96] found id: ""
	I1222 01:43:33.644558 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.644567 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:33.644573 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:33.644636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:33.674069 1685746 cri.go:96] found id: ""
	I1222 01:43:33.674133 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.674144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:33.674151 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:33.674216 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:33.700076 1685746 cri.go:96] found id: ""
	I1222 01:43:33.700102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.700111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:33.700118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:33.700179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:33.725155 1685746 cri.go:96] found id: ""
	I1222 01:43:33.725182 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.725192 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:33.725199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:33.725259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:33.752045 1685746 cri.go:96] found id: ""
	I1222 01:43:33.752120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.752144 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:33.752166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:33.752270 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:33.776869 1685746 cri.go:96] found id: ""
	I1222 01:43:33.776897 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.776917 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:33.776925 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:33.776995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:33.804537 1685746 cri.go:96] found id: ""
	I1222 01:43:33.804559 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.804568 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:33.804577 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:33.804589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:33.868017 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:33.868038 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:33.868050 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:33.893225 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:33.893268 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.925850 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:33.925880 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:33.984794 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:33.984827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.500237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:36.517959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:36.518035 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:36.566551 1685746 cri.go:96] found id: ""
	I1222 01:43:36.566578 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.566587 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:36.566594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:36.566675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:36.601952 1685746 cri.go:96] found id: ""
	I1222 01:43:36.601979 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.601988 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:36.601994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:36.602069 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:36.628093 1685746 cri.go:96] found id: ""
	I1222 01:43:36.628123 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.628132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:36.628138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:36.628199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:36.653428 1685746 cri.go:96] found id: ""
	I1222 01:43:36.653457 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.653471 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:36.653478 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:36.653536 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:36.680092 1685746 cri.go:96] found id: ""
	I1222 01:43:36.680115 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.680124 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:36.680130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:36.680189 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:36.706982 1685746 cri.go:96] found id: ""
	I1222 01:43:36.707020 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.707030 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:36.707037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:36.707112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:36.731661 1685746 cri.go:96] found id: ""
	I1222 01:43:36.731738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.731760 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:36.731783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:36.731878 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:36.759936 1685746 cri.go:96] found id: ""
	I1222 01:43:36.759958 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.759966 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:36.759975 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:36.759986 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.774574 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:36.774601 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:36.840390 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:36.840453 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:36.840474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:36.865823 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:36.865861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:36.895884 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:36.895914 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.451426 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:39.462101 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:39.462175 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:39.492238 1685746 cri.go:96] found id: ""
	I1222 01:43:39.492261 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.492270 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:39.492281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:39.492355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:39.573214 1685746 cri.go:96] found id: ""
	I1222 01:43:39.573236 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.573244 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:39.573251 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:39.573323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:39.599147 1685746 cri.go:96] found id: ""
	I1222 01:43:39.599172 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.599181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:39.599188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:39.599251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:39.624765 1685746 cri.go:96] found id: ""
	I1222 01:43:39.624850 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.624874 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:39.624915 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:39.625014 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:39.656217 1685746 cri.go:96] found id: ""
	I1222 01:43:39.656244 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.656253 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:39.656260 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:39.656349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:39.682103 1685746 cri.go:96] found id: ""
	I1222 01:43:39.682127 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.682136 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:39.682143 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:39.682211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:39.707971 1685746 cri.go:96] found id: ""
	I1222 01:43:39.707999 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.708008 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:39.708015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:39.708075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:39.737148 1685746 cri.go:96] found id: ""
	I1222 01:43:39.737175 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.737184 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:39.737194 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:39.737210 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:39.805404 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:39.805427 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:39.805441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:39.835140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:39.835180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:39.864203 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:39.864232 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.919399 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:39.919435 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.434907 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:42.447524 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:42.447601 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:42.474430 1685746 cri.go:96] found id: ""
	I1222 01:43:42.474452 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.474468 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:42.474475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:42.474534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:42.539132 1685746 cri.go:96] found id: ""
	I1222 01:43:42.539154 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.539178 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:42.539186 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:42.539287 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:42.575001 1685746 cri.go:96] found id: ""
	I1222 01:43:42.575023 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.575031 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:42.575037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:42.575095 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:42.599923 1685746 cri.go:96] found id: ""
	I1222 01:43:42.599947 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.599956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:42.599963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:42.600027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:42.624602 1685746 cri.go:96] found id: ""
	I1222 01:43:42.624630 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.624640 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:42.624646 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:42.624707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:42.649899 1685746 cri.go:96] found id: ""
	I1222 01:43:42.649925 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.649934 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:42.649941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:42.650001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:42.675756 1685746 cri.go:96] found id: ""
	I1222 01:43:42.675836 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.675860 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:42.675897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:42.675973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:42.702958 1685746 cri.go:96] found id: ""
	I1222 01:43:42.702995 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.703005 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:42.703014 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:42.703025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:42.759487 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:42.759526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.774803 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:42.774835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:42.841752 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:42.841776 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:42.841790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:42.868632 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:42.868666 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:45.400104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:45.410950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:45.411071 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:45.436920 1685746 cri.go:96] found id: ""
	I1222 01:43:45.436957 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.436966 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:45.436973 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:45.437044 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:45.464719 1685746 cri.go:96] found id: ""
	I1222 01:43:45.464755 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.464765 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:45.464771 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:45.464841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:45.501180 1685746 cri.go:96] found id: ""
	I1222 01:43:45.501207 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.501226 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:45.501234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:45.501305 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:45.547294 1685746 cri.go:96] found id: ""
	I1222 01:43:45.547339 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.547350 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:45.547357 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:45.547435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:45.581484 1685746 cri.go:96] found id: ""
	I1222 01:43:45.581526 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.581535 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:45.581542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:45.581613 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:45.610563 1685746 cri.go:96] found id: ""
	I1222 01:43:45.610591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.610600 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:45.610607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:45.610679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:45.637028 1685746 cri.go:96] found id: ""
	I1222 01:43:45.637054 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.637064 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:45.637070 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:45.637141 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:45.662660 1685746 cri.go:96] found id: ""
	I1222 01:43:45.662740 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.662756 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:45.662767 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:45.662779 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:45.719167 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:45.719208 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:45.734405 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:45.734438 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:45.802645 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:45.802667 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:45.802680 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:45.829402 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:45.829439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:48.362229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:48.372648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:48.372722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:48.399816 1685746 cri.go:96] found id: ""
	I1222 01:43:48.399843 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.399852 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:48.399859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:48.399922 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:48.424774 1685746 cri.go:96] found id: ""
	I1222 01:43:48.424800 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.424809 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:48.424816 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:48.424873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:48.449402 1685746 cri.go:96] found id: ""
	I1222 01:43:48.449429 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.449438 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:48.449444 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:48.449501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:48.481785 1685746 cri.go:96] found id: ""
	I1222 01:43:48.481811 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.481822 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:48.481828 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:48.481884 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:48.535392 1685746 cri.go:96] found id: ""
	I1222 01:43:48.535421 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.535429 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:48.535435 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:48.535495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:48.581091 1685746 cri.go:96] found id: ""
	I1222 01:43:48.581119 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.581128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:48.581135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:48.581195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:48.608115 1685746 cri.go:96] found id: ""
	I1222 01:43:48.608143 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.608152 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:48.608158 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:48.608222 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:48.634982 1685746 cri.go:96] found id: ""
	I1222 01:43:48.635007 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.635015 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:48.635024 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:48.635040 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:48.690980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:48.691017 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:48.706101 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:48.706126 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:48.773880 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:48.773903 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:48.773915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:48.798770 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:48.798805 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:51.326747 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:51.337244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:51.337316 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:51.361650 1685746 cri.go:96] found id: ""
	I1222 01:43:51.361674 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.361685 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:51.361691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:51.361752 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:51.387243 1685746 cri.go:96] found id: ""
	I1222 01:43:51.387267 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.387275 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:51.387282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:51.387339 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:51.412051 1685746 cri.go:96] found id: ""
	I1222 01:43:51.412076 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.412085 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:51.412091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:51.412152 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:51.442828 1685746 cri.go:96] found id: ""
	I1222 01:43:51.442855 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.442864 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:51.442871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:51.442931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:51.469084 1685746 cri.go:96] found id: ""
	I1222 01:43:51.469111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.469120 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:51.469128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:51.469196 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:51.505900 1685746 cri.go:96] found id: ""
	I1222 01:43:51.505931 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.505940 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:51.505947 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:51.506015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:51.544756 1685746 cri.go:96] found id: ""
	I1222 01:43:51.544794 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.544803 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:51.544810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:51.544881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:51.595192 1685746 cri.go:96] found id: ""
	I1222 01:43:51.595274 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.595308 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:51.595330 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:51.595370 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:51.651780 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:51.651815 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:51.666583 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:51.666611 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:51.736962 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:51.736984 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:51.736997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:51.763237 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:51.763272 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.292529 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:54.303313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:54.303393 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:54.329228 1685746 cri.go:96] found id: ""
	I1222 01:43:54.329251 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.329260 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:54.329266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:54.329325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:54.353443 1685746 cri.go:96] found id: ""
	I1222 01:43:54.353478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.353488 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:54.353495 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:54.353565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:54.385463 1685746 cri.go:96] found id: ""
	I1222 01:43:54.385487 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.385496 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:54.385502 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:54.385571 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:54.413065 1685746 cri.go:96] found id: ""
	I1222 01:43:54.413135 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.413160 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:54.413209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:54.413290 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:54.440350 1685746 cri.go:96] found id: ""
	I1222 01:43:54.440376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.440385 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:54.440391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:54.440469 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:54.469549 1685746 cri.go:96] found id: ""
	I1222 01:43:54.469583 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.469592 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:54.469599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:54.469668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:54.514637 1685746 cri.go:96] found id: ""
	I1222 01:43:54.514714 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.514738 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:54.514761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:54.514876 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:54.546685 1685746 cri.go:96] found id: ""
	I1222 01:43:54.546708 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.546717 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:54.546726 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:54.546737 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:54.576240 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:54.576324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.618824 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:54.618853 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:54.673867 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:54.673900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:54.689028 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:54.689057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:54.755999 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:57.257146 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:57.268025 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:57.268100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:57.292709 1685746 cri.go:96] found id: ""
	I1222 01:43:57.292738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.292748 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:57.292761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:57.292826 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:57.321159 1685746 cri.go:96] found id: ""
	I1222 01:43:57.321186 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.321195 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:57.321201 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:57.321264 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:57.350573 1685746 cri.go:96] found id: ""
	I1222 01:43:57.350601 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.350611 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:57.350620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:57.350682 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:57.380391 1685746 cri.go:96] found id: ""
	I1222 01:43:57.380425 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.380435 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:57.380441 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:57.380502 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:57.404977 1685746 cri.go:96] found id: ""
	I1222 01:43:57.405003 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.405012 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:57.405018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:57.405080 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:57.431206 1685746 cri.go:96] found id: ""
	I1222 01:43:57.431234 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.431243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:57.431250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:57.431310 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:57.458352 1685746 cri.go:96] found id: ""
	I1222 01:43:57.458378 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.458387 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:57.458393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:57.458454 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:57.487672 1685746 cri.go:96] found id: ""
	I1222 01:43:57.487700 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.487709 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:57.487718 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:57.487729 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:57.523843 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:57.523925 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:57.589400 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:57.589476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:57.650987 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:57.651025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:57.666115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:57.666151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:57.735484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.237195 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:00.303116 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:00.303238 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:00.349571 1685746 cri.go:96] found id: ""
	I1222 01:44:00.349604 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.349614 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:00.349623 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:00.349691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:00.397703 1685746 cri.go:96] found id: ""
	I1222 01:44:00.397728 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.397757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:00.397772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:00.397869 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:00.445846 1685746 cri.go:96] found id: ""
	I1222 01:44:00.445883 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.445891 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:00.445899 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:00.445975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:00.481389 1685746 cri.go:96] found id: ""
	I1222 01:44:00.481433 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.481443 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:00.481451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:00.481545 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:00.555281 1685746 cri.go:96] found id: ""
	I1222 01:44:00.555323 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.555333 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:00.555339 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:00.555417 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:00.610523 1685746 cri.go:96] found id: ""
	I1222 01:44:00.610554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.610565 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:00.610572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:00.610639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:00.640211 1685746 cri.go:96] found id: ""
	I1222 01:44:00.640242 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.640252 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:00.640261 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:00.640334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:00.672011 1685746 cri.go:96] found id: ""
	I1222 01:44:00.672037 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.672046 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:00.672055 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:00.672067 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:00.730908 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:00.730946 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:00.746205 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:00.746280 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:00.814946 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.814969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:00.814982 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:00.841341 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:00.841376 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:03.372817 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:03.383361 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:03.383438 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:03.407536 1685746 cri.go:96] found id: ""
	I1222 01:44:03.407558 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.407566 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:03.407572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:03.407631 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:03.433092 1685746 cri.go:96] found id: ""
	I1222 01:44:03.433120 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.433129 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:03.433135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:03.433193 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:03.462721 1685746 cri.go:96] found id: ""
	I1222 01:44:03.462750 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.462759 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:03.462765 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:03.462824 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:03.512849 1685746 cri.go:96] found id: ""
	I1222 01:44:03.512871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.512880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:03.512887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:03.512946 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:03.574191 1685746 cri.go:96] found id: ""
	I1222 01:44:03.574217 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.574226 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:03.574232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:03.574299 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:03.600756 1685746 cri.go:96] found id: ""
	I1222 01:44:03.600785 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.600794 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:03.600801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:03.600865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:03.627524 1685746 cri.go:96] found id: ""
	I1222 01:44:03.627554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.627564 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:03.627571 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:03.627632 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:03.652207 1685746 cri.go:96] found id: ""
	I1222 01:44:03.652230 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.652239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:03.652248 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:03.652258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:03.710392 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:03.710427 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:03.725850 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:03.725877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:03.793641 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:03.793708 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:03.793725 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:03.819086 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:03.819122 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:06.350666 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:06.361704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:06.361772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:06.387959 1685746 cri.go:96] found id: ""
	I1222 01:44:06.387985 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.387994 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:06.388001 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:06.388063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:06.420195 1685746 cri.go:96] found id: ""
	I1222 01:44:06.420229 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.420239 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:06.420245 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:06.420318 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:06.444201 1685746 cri.go:96] found id: ""
	I1222 01:44:06.444228 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.444237 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:06.444244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:06.444326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:06.469606 1685746 cri.go:96] found id: ""
	I1222 01:44:06.469635 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.469644 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:06.469650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:06.469714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:06.516673 1685746 cri.go:96] found id: ""
	I1222 01:44:06.516703 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.516712 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:06.516719 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:06.516783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:06.554976 1685746 cri.go:96] found id: ""
	I1222 01:44:06.555004 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.555014 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:06.555020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:06.555079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:06.587406 1685746 cri.go:96] found id: ""
	I1222 01:44:06.587434 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.587443 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:06.587449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:06.587511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:06.620595 1685746 cri.go:96] found id: ""
	I1222 01:44:06.620623 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.620633 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:06.620642 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:06.620655 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:06.677532 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:06.677567 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:06.692910 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:06.692987 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:06.760398 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:06.750827   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.751480   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.753963   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.754997   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.755795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:06.750827   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.751480   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.753963   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.754997   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.755795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:06.760423 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:06.760436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:06.785709 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:06.785743 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.314372 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:09.325259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:09.325349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:09.350687 1685746 cri.go:96] found id: ""
	I1222 01:44:09.350712 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.350726 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:09.350733 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:09.350794 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:09.376225 1685746 cri.go:96] found id: ""
	I1222 01:44:09.376252 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.376260 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:09.376267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:09.376332 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:09.402898 1685746 cri.go:96] found id: ""
	I1222 01:44:09.402922 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.402931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:09.402937 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:09.403008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:09.428038 1685746 cri.go:96] found id: ""
	I1222 01:44:09.428066 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.428075 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:09.428082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:09.428150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:09.456772 1685746 cri.go:96] found id: ""
	I1222 01:44:09.456798 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.456806 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:09.456813 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:09.456871 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:09.484926 1685746 cri.go:96] found id: ""
	I1222 01:44:09.484953 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.484962 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:09.484968 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:09.485029 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:09.521247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.521276 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.521285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:09.521292 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:09.521361 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:09.559247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.559283 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.559292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:09.559301 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:09.559313 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:09.576452 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:09.576488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:09.647498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:09.639570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.640313   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.641853   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.642396   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.643920   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:09.639570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.640313   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.641853   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.642396   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.643920   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:09.647522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:09.647535 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:09.672763 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:09.672799 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.703339 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:09.703367 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.258428 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:12.269740 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:12.269827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:12.295142 1685746 cri.go:96] found id: ""
	I1222 01:44:12.295166 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.295174 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:12.295181 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:12.295239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:12.324426 1685746 cri.go:96] found id: ""
	I1222 01:44:12.324453 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.324462 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:12.324468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:12.324528 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:12.352908 1685746 cri.go:96] found id: ""
	I1222 01:44:12.352936 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.352945 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:12.352952 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:12.353016 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:12.382056 1685746 cri.go:96] found id: ""
	I1222 01:44:12.382106 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.382115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:12.382122 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:12.382184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:12.405895 1685746 cri.go:96] found id: ""
	I1222 01:44:12.405926 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.405935 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:12.405941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:12.406063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:12.432020 1685746 cri.go:96] found id: ""
	I1222 01:44:12.432046 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.432055 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:12.432062 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:12.432167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:12.460268 1685746 cri.go:96] found id: ""
	I1222 01:44:12.460316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.460325 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:12.460332 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:12.460391 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:12.510214 1685746 cri.go:96] found id: ""
	I1222 01:44:12.510243 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.510252 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:12.510261 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:12.510281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:12.574866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:12.574895 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.630459 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:12.630495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:12.645639 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:12.645667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:12.715658 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:12.707654   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.708218   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.709705   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.710254   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.711749   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:12.707654   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.708218   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.709705   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.710254   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.711749   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:12.715678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:12.715691 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.242028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:15.253031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:15.253105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:15.283751 1685746 cri.go:96] found id: ""
	I1222 01:44:15.283784 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.283794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:15.283800 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:15.283865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:15.308803 1685746 cri.go:96] found id: ""
	I1222 01:44:15.308830 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.308840 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:15.308846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:15.308911 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:15.334334 1685746 cri.go:96] found id: ""
	I1222 01:44:15.334362 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.334371 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:15.334378 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:15.334437 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:15.363819 1685746 cri.go:96] found id: ""
	I1222 01:44:15.363843 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.363852 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:15.363859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:15.363920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:15.389166 1685746 cri.go:96] found id: ""
	I1222 01:44:15.389194 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.389203 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:15.389211 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:15.389275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:15.418948 1685746 cri.go:96] found id: ""
	I1222 01:44:15.419022 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.419035 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:15.419042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:15.419135 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:15.446013 1685746 cri.go:96] found id: ""
	I1222 01:44:15.446105 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.446130 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:15.446162 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:15.446236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:15.470779 1685746 cri.go:96] found id: ""
	I1222 01:44:15.470806 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.470815 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:15.470825 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:15.470857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:15.551154 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:15.551246 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:15.578834 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:15.578861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:15.644949 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:15.637144   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.637770   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639327   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639800   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.641301   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:15.637144   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.637770   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639327   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639800   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.641301   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:15.644969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:15.644981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.670551 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:15.670585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:18.202679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:18.213735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:18.213812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:18.239304 1685746 cri.go:96] found id: ""
	I1222 01:44:18.239327 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.239336 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:18.239342 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:18.239401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:18.265064 1685746 cri.go:96] found id: ""
	I1222 01:44:18.265089 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.265098 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:18.265104 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:18.265165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:18.290606 1685746 cri.go:96] found id: ""
	I1222 01:44:18.290642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.290652 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:18.290659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:18.290734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:18.317208 1685746 cri.go:96] found id: ""
	I1222 01:44:18.317231 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.317240 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:18.317246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:18.317306 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:18.342186 1685746 cri.go:96] found id: ""
	I1222 01:44:18.342207 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.342216 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:18.342222 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:18.342280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:18.367436 1685746 cri.go:96] found id: ""
	I1222 01:44:18.367468 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.367477 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:18.367484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:18.367572 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:18.392591 1685746 cri.go:96] found id: ""
	I1222 01:44:18.392616 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.392625 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:18.392632 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:18.392691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:18.417782 1685746 cri.go:96] found id: ""
	I1222 01:44:18.417820 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.417829 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:18.417838 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:18.417850 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:18.475370 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:18.475402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:18.496693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:18.496722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:18.602667 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:18.602690 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:18.602704 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:18.628074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:18.628158 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:21.160991 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:21.171843 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:21.171925 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:21.197008 1685746 cri.go:96] found id: ""
	I1222 01:44:21.197035 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.197045 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:21.197051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:21.197111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:21.222701 1685746 cri.go:96] found id: ""
	I1222 01:44:21.222731 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.222740 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:21.222747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:21.222812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:21.247835 1685746 cri.go:96] found id: ""
	I1222 01:44:21.247858 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.247867 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:21.247874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:21.247932 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:21.272366 1685746 cri.go:96] found id: ""
	I1222 01:44:21.272400 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.272411 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:21.272418 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:21.272483 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:21.297348 1685746 cri.go:96] found id: ""
	I1222 01:44:21.297375 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.297384 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:21.297391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:21.297449 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:21.321989 1685746 cri.go:96] found id: ""
	I1222 01:44:21.322013 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.322022 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:21.322029 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:21.322112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:21.350652 1685746 cri.go:96] found id: ""
	I1222 01:44:21.350677 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.350685 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:21.350691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:21.350754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:21.382678 1685746 cri.go:96] found id: ""
	I1222 01:44:21.382748 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.382773 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:21.382791 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:21.382804 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:21.438683 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:21.438718 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:21.453712 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:21.453745 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:21.571593 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:21.571621 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:21.571635 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:21.598254 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:21.598290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:24.133046 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:24.144639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:24.144716 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:24.170797 1685746 cri.go:96] found id: ""
	I1222 01:44:24.170821 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.170830 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:24.170838 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:24.170901 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:24.198790 1685746 cri.go:96] found id: ""
	I1222 01:44:24.198813 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.198822 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:24.198830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:24.198892 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:24.223222 1685746 cri.go:96] found id: ""
	I1222 01:44:24.223245 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.223253 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:24.223259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:24.223317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:24.248490 1685746 cri.go:96] found id: ""
	I1222 01:44:24.248573 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.248590 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:24.248598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:24.248678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:24.273541 1685746 cri.go:96] found id: ""
	I1222 01:44:24.273570 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.273578 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:24.273585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:24.273647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:24.298819 1685746 cri.go:96] found id: ""
	I1222 01:44:24.298847 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.298856 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:24.298863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:24.298921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:24.324215 1685746 cri.go:96] found id: ""
	I1222 01:44:24.324316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.324334 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:24.324341 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:24.324420 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:24.349700 1685746 cri.go:96] found id: ""
	I1222 01:44:24.349727 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.349736 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:24.349745 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:24.349756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:24.405384 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:24.405419 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:24.420496 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:24.420524 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:24.481353 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:24.481378 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:24.481392 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:24.507731 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:24.508076 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.051455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:27.062328 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:27.062402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:27.088764 1685746 cri.go:96] found id: ""
	I1222 01:44:27.088786 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.088795 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:27.088801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:27.088859 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:27.113929 1685746 cri.go:96] found id: ""
	I1222 01:44:27.113951 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.113959 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:27.113966 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:27.114027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:27.139537 1685746 cri.go:96] found id: ""
	I1222 01:44:27.139562 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.139577 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:27.139584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:27.139645 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:27.164769 1685746 cri.go:96] found id: ""
	I1222 01:44:27.164792 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.164800 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:27.164807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:27.164867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:27.190396 1685746 cri.go:96] found id: ""
	I1222 01:44:27.190424 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.190433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:27.190440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:27.190503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:27.215574 1685746 cri.go:96] found id: ""
	I1222 01:44:27.215599 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.215608 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:27.215616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:27.215684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:27.246139 1685746 cri.go:96] found id: ""
	I1222 01:44:27.246162 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.246172 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:27.246178 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:27.246239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:27.272153 1685746 cri.go:96] found id: ""
	I1222 01:44:27.272177 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.272185 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:27.272193 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:27.272205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.303523 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:27.303552 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:27.363938 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:27.363985 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:27.380130 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:27.380163 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:27.443113 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:27.443137 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:27.443149 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:29.969751 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:29.980564 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:29.980638 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:30.027489 1685746 cri.go:96] found id: ""
	I1222 01:44:30.027515 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.027524 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:30.027532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:30.027604 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:30.063116 1685746 cri.go:96] found id: ""
	I1222 01:44:30.063142 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.063152 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:30.063160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:30.063229 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:30.111428 1685746 cri.go:96] found id: ""
	I1222 01:44:30.111455 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.111466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:30.111473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:30.111543 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:30.142346 1685746 cri.go:96] found id: ""
	I1222 01:44:30.142381 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.142391 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:30.142406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:30.142499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:30.171044 1685746 cri.go:96] found id: ""
	I1222 01:44:30.171068 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.171078 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:30.171084 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:30.171150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:30.206010 1685746 cri.go:96] found id: ""
	I1222 01:44:30.206034 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.206044 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:30.206051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:30.206225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:30.235230 1685746 cri.go:96] found id: ""
	I1222 01:44:30.235255 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.235264 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:30.235272 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:30.235404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:30.262624 1685746 cri.go:96] found id: ""
	I1222 01:44:30.262651 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.262661 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:30.262671 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:30.262689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:30.320010 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:30.320048 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:30.336273 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:30.336303 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:30.407334 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:30.407358 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:30.407373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:30.432976 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:30.433010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:32.965996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:32.976893 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:32.976972 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:33.004108 1685746 cri.go:96] found id: ""
	I1222 01:44:33.004138 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.004149 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:33.004157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:33.004293 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:33.032305 1685746 cri.go:96] found id: ""
	I1222 01:44:33.032333 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.032343 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:33.032350 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:33.032410 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:33.060572 1685746 cri.go:96] found id: ""
	I1222 01:44:33.060600 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.060610 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:33.060616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:33.060680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:33.086067 1685746 cri.go:96] found id: ""
	I1222 01:44:33.086112 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.086122 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:33.086129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:33.086188 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:33.112283 1685746 cri.go:96] found id: ""
	I1222 01:44:33.112310 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.112320 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:33.112326 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:33.112390 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:33.143337 1685746 cri.go:96] found id: ""
	I1222 01:44:33.143363 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.143372 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:33.143379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:33.143441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:33.169224 1685746 cri.go:96] found id: ""
	I1222 01:44:33.169250 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.169259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:33.169267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:33.169327 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:33.198401 1685746 cri.go:96] found id: ""
	I1222 01:44:33.198422 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.198431 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:33.198440 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:33.198451 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:33.256328 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:33.256364 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:33.271899 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:33.271930 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:33.338753 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:33.338786 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:33.338800 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:33.364007 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:33.364042 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:35.895269 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:35.906191 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:35.906266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:35.931271 1685746 cri.go:96] found id: ""
	I1222 01:44:35.931297 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.931306 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:35.931313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:35.931372 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:35.958259 1685746 cri.go:96] found id: ""
	I1222 01:44:35.958289 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.958298 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:35.958312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:35.958414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:35.982836 1685746 cri.go:96] found id: ""
	I1222 01:44:35.982861 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.982871 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:35.982877 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:35.982937 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:36.012610 1685746 cri.go:96] found id: ""
	I1222 01:44:36.012642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.012652 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:36.012659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:36.012739 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:36.039888 1685746 cri.go:96] found id: ""
	I1222 01:44:36.039914 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.039924 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:36.039933 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:36.039995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:36.070115 1685746 cri.go:96] found id: ""
	I1222 01:44:36.070144 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.070153 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:36.070160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:36.070220 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:36.095790 1685746 cri.go:96] found id: ""
	I1222 01:44:36.095871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.095887 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:36.095896 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:36.095967 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:36.122442 1685746 cri.go:96] found id: ""
	I1222 01:44:36.122519 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.122531 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:36.122570 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:36.122585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:36.151370 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:36.151396 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:36.206896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:36.206937 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:36.222382 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:36.222413 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:36.290888 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:36.290912 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:36.290927 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:38.822770 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:38.837195 1685746 out.go:203] 
	W1222 01:44:38.840003 1685746 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:44:38.840044 1685746 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:44:38.840057 1685746 out.go:285] * Related issues:
	W1222 01:44:38.840077 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:44:38.840096 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:44:38.842944 1685746 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.471912430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.471984422Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472090343Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472165782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472253611Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472320967Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472396340Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472469810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472535599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472630435Z" level=info msg="Connect containerd service"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472973486Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.473627974Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488429715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488493453Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488524985Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488575940Z" level=info msg="Start recovering state"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527785198Z" level=info msg="Start event monitor"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527839713Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527850864Z" level=info msg="Start streaming server"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527863213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527915176Z" level=info msg="runtime interface starting up..."
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527922561Z" level=info msg="starting plugins..."
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527953700Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.528090677Z" level=info msg="containerd successfully booted in 0.081452s"
	Dec 22 01:38:37 newest-cni-869293 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:41.874234   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:41.874806   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:41.876597   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:41.877247   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:41.879043   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:44:41 up 1 day,  8:27,  0 user,  load average: 1.50, 0.97, 1.41
	Linux newest-cni-869293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:44:38 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:39 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 22 01:44:39 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:39 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:39 newest-cni-869293 kubelet[13185]: E1222 01:44:39.586718   13185 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:39 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:39 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:40 newest-cni-869293 kubelet[13192]: E1222 01:44:40.319493   13192 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:40 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:41 newest-cni-869293 kubelet[13211]: E1222 01:44:41.100645   13211 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:41 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:41 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:41 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 22 01:44:41 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:41 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:41 newest-cni-869293 kubelet[13300]: E1222 01:44:41.816464   13300 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:41 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:41 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (361.005333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-869293" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (372.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.4s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [identical warning repeated 25 more times]
E1222 01:43:07.826402 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [identical warning repeated 65 more times]
E1222 01:44:13.742755 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:44:26.767134 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:45:36.789641 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:45:49.809914 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:45:56.757166 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1222 01:45:58.069955 1396864 config.go:182] Loaded profile config "auto-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
E1222 01:46:29.154279 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1222 01:47:29.031113 1396864 config.go:182] Loaded profile config "flannel-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:48:07.826116 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:49:13.742817 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	[previous line repeated 12 more times]
E1222 01:49:26.767136 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1222 01:50:44.065776 1396864 config.go:182] Loaded profile config "custom-flannel-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:50:56.757414 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:50:58.307409 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:50:58.312690 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:50:58.322983 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:50:58.343529 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:50:58.383824 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:50:58.464138 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:50:58.624617 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:50:58.945389 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:50:59.586463 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:51:00.867641 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:51:03.427849 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:51:08.548052 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:51:12.214686 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:51:18.788440 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:51:29.154168 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 2 (406.967321ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-154186
helpers_test.go:244: (dbg) docker inspect no-preload-154186:

-- stdout --
	[
	    {
	        "Id": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	        "Created": "2025-12-22T01:26:03.981102368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1681449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:36:23.503691575Z",
	            "FinishedAt": "2025-12-22T01:36:22.112804811Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hosts",
	        "LogPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7-json.log",
	        "Name": "/no-preload-154186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-154186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-154186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	                "LowerDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-154186",
	                "Source": "/var/lib/docker/volumes/no-preload-154186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-154186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-154186",
	                "name.minikube.sigs.k8s.io": "no-preload-154186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79de2efae0fd51e067446e17772315f189c10d5767e33af4ebd104752f65737c",
	            "SandboxKey": "/var/run/docker/netns/79de2efae0fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38702"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38703"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38706"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38704"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38705"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-154186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c4:4d:4a:c4:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16496b4bf71c897a797bd98394f7b6821dd2bd1d65c81e9f1efd0fcd1f14d1a5",
	                    "EndpointID": "47af66f9da650982ed99a47d4f083adda357be5441350f59f6280b70b837f98e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-154186",
	                        "66e4ffc32ac4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 2 (436.294068ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-154186 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-154186 logs -n 25: (1.015869311s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-892179 sudo systemctl status kubelet --all --full --no-pager                                                                 │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl cat kubelet --no-pager                                                                                 │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo journalctl -xeu kubelet --all --full --no-pager                                                                  │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo cat /etc/kubernetes/kubelet.conf                                                                                 │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo cat /var/lib/kubelet/config.yaml                                                                                 │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl status docker --all --full --no-pager                                                                  │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl cat docker --no-pager                                                                                  │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo cat /etc/docker/daemon.json                                                                                      │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p custom-flannel-892179 sudo docker system info                                                                                               │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl status cri-docker --all --full --no-pager                                                              │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl cat cri-docker --no-pager                                                                              │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p custom-flannel-892179 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo cri-dockerd --version                                                                                            │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl status containerd --all --full --no-pager                                                              │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl cat containerd --no-pager                                                                              │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo cat /lib/systemd/system/containerd.service                                                                       │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo cat /etc/containerd/config.toml                                                                                  │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo containerd config dump                                                                                           │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl status crio --all --full --no-pager                                                                    │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p custom-flannel-892179 sudo systemctl cat crio --no-pager                                                                                    │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ ssh     │ -p custom-flannel-892179 sudo crio config                                                                                                      │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ delete  │ -p custom-flannel-892179                                                                                                                       │ custom-flannel-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │ 22 Dec 25 01:51 UTC │
	│ start   │ -p kindnet-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd │ kindnet-892179        │ jenkins │ v1.37.0 │ 22 Dec 25 01:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:51:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:51:17.628051 1733459 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:51:17.628363 1733459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:51:17.628384 1733459 out.go:374] Setting ErrFile to fd 2...
	I1222 01:51:17.628391 1733459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:51:17.628642 1733459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:51:17.629086 1733459 out.go:368] Setting JSON to false
	I1222 01:51:17.629993 1733459 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":117230,"bootTime":1766251047,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:51:17.630067 1733459 start.go:143] virtualization:  
	I1222 01:51:17.634456 1733459 out.go:179] * [kindnet-892179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:51:17.639026 1733459 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:51:17.639093 1733459 notify.go:221] Checking for updates...
	I1222 01:51:17.642686 1733459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:51:17.645923 1733459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:51:17.649038 1733459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:51:17.652191 1733459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:51:17.655292 1733459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:51:17.658917 1733459 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:51:17.659066 1733459 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:51:17.683771 1733459 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:51:17.683895 1733459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:51:17.755270 1733459 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:51:17.746104519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:51:17.755374 1733459 docker.go:319] overlay module found
	I1222 01:51:17.758730 1733459 out.go:179] * Using the docker driver based on user configuration
	I1222 01:51:17.761810 1733459 start.go:309] selected driver: docker
	I1222 01:51:17.761828 1733459 start.go:928] validating driver "docker" against <nil>
	I1222 01:51:17.761842 1733459 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:51:17.762646 1733459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:51:17.819532 1733459 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:51:17.810301196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:51:17.819693 1733459 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 01:51:17.819933 1733459 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:51:17.822918 1733459 out.go:179] * Using Docker driver with root privileges
	I1222 01:51:17.825777 1733459 cni.go:84] Creating CNI manager for "kindnet"
	I1222 01:51:17.825808 1733459 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 01:51:17.825880 1733459 start.go:353] cluster config:
	{Name:kindnet-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:51:17.829123 1733459 out.go:179] * Starting "kindnet-892179" primary control-plane node in "kindnet-892179" cluster
	I1222 01:51:17.832042 1733459 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:51:17.835059 1733459 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:51:17.837933 1733459 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:51:17.837981 1733459 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1222 01:51:17.837995 1733459 cache.go:65] Caching tarball of preloaded images
	I1222 01:51:17.838010 1733459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:51:17.838140 1733459 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:51:17.838155 1733459 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1222 01:51:17.838285 1733459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/config.json ...
	I1222 01:51:17.838313 1733459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/config.json: {Name:mk48f69203e778a9b5c4ef5c672890b7dfb90b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:17.858013 1733459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:51:17.858039 1733459 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:51:17.858060 1733459 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:51:17.858137 1733459 start.go:360] acquireMachinesLock for kindnet-892179: {Name:mk23b01cb850c2917d185d3253f5fe1fb77a1d44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:51:17.858264 1733459 start.go:364] duration metric: took 103.131µs to acquireMachinesLock for "kindnet-892179"
	I1222 01:51:17.858295 1733459 start.go:93] Provisioning new machine with config: &{Name:kindnet-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:51:17.858379 1733459 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:51:17.861683 1733459 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:51:17.861916 1733459 start.go:159] libmachine.API.Create for "kindnet-892179" (driver="docker")
	I1222 01:51:17.861961 1733459 client.go:173] LocalClient.Create starting
	I1222 01:51:17.862046 1733459 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:51:17.862116 1733459 main.go:144] libmachine: Decoding PEM data...
	I1222 01:51:17.862142 1733459 main.go:144] libmachine: Parsing certificate...
	I1222 01:51:17.862201 1733459 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:51:17.862224 1733459 main.go:144] libmachine: Decoding PEM data...
	I1222 01:51:17.862239 1733459 main.go:144] libmachine: Parsing certificate...
	I1222 01:51:17.862617 1733459 cli_runner.go:164] Run: docker network inspect kindnet-892179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:51:17.878986 1733459 cli_runner.go:211] docker network inspect kindnet-892179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:51:17.879085 1733459 network_create.go:284] running [docker network inspect kindnet-892179] to gather additional debugging logs...
	I1222 01:51:17.879104 1733459 cli_runner.go:164] Run: docker network inspect kindnet-892179
	W1222 01:51:17.895255 1733459 cli_runner.go:211] docker network inspect kindnet-892179 returned with exit code 1
	I1222 01:51:17.895292 1733459 network_create.go:287] error running [docker network inspect kindnet-892179]: docker network inspect kindnet-892179: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-892179 not found
	I1222 01:51:17.895307 1733459 network_create.go:289] output of [docker network inspect kindnet-892179]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-892179 not found
	
	** /stderr **
	I1222 01:51:17.895403 1733459 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:51:17.912475 1733459 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:51:17.912879 1733459 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:51:17.913257 1733459 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:51:17.913795 1733459 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d4eb0}
	I1222 01:51:17.913821 1733459 network_create.go:124] attempt to create docker network kindnet-892179 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:51:17.913880 1733459 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-892179 kindnet-892179
	I1222 01:51:17.975890 1733459 network_create.go:108] docker network kindnet-892179 192.168.76.0/24 created
	I1222 01:51:17.975928 1733459 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-892179" container
	I1222 01:51:17.976014 1733459 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:51:17.993821 1733459 cli_runner.go:164] Run: docker volume create kindnet-892179 --label name.minikube.sigs.k8s.io=kindnet-892179 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:51:18.017324 1733459 oci.go:103] Successfully created a docker volume kindnet-892179
	I1222 01:51:18.017438 1733459 cli_runner.go:164] Run: docker run --rm --name kindnet-892179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-892179 --entrypoint /usr/bin/test -v kindnet-892179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:51:18.582561 1733459 oci.go:107] Successfully prepared a docker volume kindnet-892179
	I1222 01:51:18.582632 1733459 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:51:18.582645 1733459 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:51:18.582713 1733459 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-892179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:51:22.520558 1733459 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-892179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.937806849s)
	I1222 01:51:22.520594 1733459 kic.go:203] duration metric: took 3.937945304s to extract preloaded images to volume ...
	W1222 01:51:22.520747 1733459 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:51:22.520863 1733459 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:51:22.574174 1733459 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-892179 --name kindnet-892179 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-892179 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-892179 --network kindnet-892179 --ip 192.168.76.2 --volume kindnet-892179:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:51:22.901632 1733459 cli_runner.go:164] Run: docker container inspect kindnet-892179 --format={{.State.Running}}
	I1222 01:51:22.926995 1733459 cli_runner.go:164] Run: docker container inspect kindnet-892179 --format={{.State.Status}}
	I1222 01:51:22.953665 1733459 cli_runner.go:164] Run: docker exec kindnet-892179 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:51:23.013362 1733459 oci.go:144] the created container "kindnet-892179" has a running status.
	I1222 01:51:23.013392 1733459 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kindnet-892179/id_rsa...
	I1222 01:51:23.177087 1733459 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kindnet-892179/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:51:23.213655 1733459 cli_runner.go:164] Run: docker container inspect kindnet-892179 --format={{.State.Status}}
	I1222 01:51:23.233255 1733459 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:51:23.233277 1733459 kic_runner.go:114] Args: [docker exec --privileged kindnet-892179 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:51:23.293931 1733459 cli_runner.go:164] Run: docker container inspect kindnet-892179 --format={{.State.Status}}
	I1222 01:51:23.323190 1733459 machine.go:94] provisionDockerMachine start ...
	I1222 01:51:23.323274 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:23.348269 1733459 main.go:144] libmachine: Using SSH client type: native
	I1222 01:51:23.348621 1733459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38732 <nil> <nil>}
	I1222 01:51:23.348631 1733459 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:51:23.349368 1733459 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37412->127.0.0.1:38732: read: connection reset by peer
	I1222 01:51:26.481489 1733459 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-892179
	
	I1222 01:51:26.481514 1733459 ubuntu.go:182] provisioning hostname "kindnet-892179"
	I1222 01:51:26.481587 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:26.503545 1733459 main.go:144] libmachine: Using SSH client type: native
	I1222 01:51:26.503922 1733459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38732 <nil> <nil>}
	I1222 01:51:26.503943 1733459 main.go:144] libmachine: About to run SSH command:
	sudo hostname kindnet-892179 && echo "kindnet-892179" | sudo tee /etc/hostname
	I1222 01:51:26.647659 1733459 main.go:144] libmachine: SSH cmd err, output: <nil>: kindnet-892179
	
	I1222 01:51:26.647758 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:26.664734 1733459 main.go:144] libmachine: Using SSH client type: native
	I1222 01:51:26.665053 1733459 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38732 <nil> <nil>}
	I1222 01:51:26.665075 1733459 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-892179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-892179/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-892179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:51:26.798331 1733459 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:51:26.798397 1733459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:51:26.798431 1733459 ubuntu.go:190] setting up certificates
	I1222 01:51:26.798449 1733459 provision.go:84] configureAuth start
	I1222 01:51:26.798510 1733459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-892179
	I1222 01:51:26.815683 1733459 provision.go:143] copyHostCerts
	I1222 01:51:26.815754 1733459 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:51:26.815771 1733459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:51:26.815855 1733459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:51:26.815991 1733459 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:51:26.816002 1733459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:51:26.816033 1733459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:51:26.816093 1733459 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:51:26.816103 1733459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:51:26.816128 1733459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:51:26.816182 1733459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.kindnet-892179 san=[127.0.0.1 192.168.76.2 kindnet-892179 localhost minikube]
	I1222 01:51:27.172251 1733459 provision.go:177] copyRemoteCerts
	I1222 01:51:27.172332 1733459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:51:27.172377 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:27.190032 1733459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38732 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kindnet-892179/id_rsa Username:docker}
	I1222 01:51:27.294534 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:51:27.313044 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1222 01:51:27.331371 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:51:27.349800 1733459 provision.go:87] duration metric: took 551.326594ms to configureAuth
	I1222 01:51:27.349830 1733459 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:51:27.350014 1733459 config.go:182] Loaded profile config "kindnet-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:51:27.350037 1733459 machine.go:97] duration metric: took 4.026822286s to provisionDockerMachine
	I1222 01:51:27.350045 1733459 client.go:176] duration metric: took 9.488073266s to LocalClient.Create
	I1222 01:51:27.350068 1733459 start.go:167] duration metric: took 9.488153136s to libmachine.API.Create "kindnet-892179"
	I1222 01:51:27.350105 1733459 start.go:293] postStartSetup for "kindnet-892179" (driver="docker")
	I1222 01:51:27.350116 1733459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:51:27.350183 1733459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:51:27.350227 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:27.368320 1733459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38732 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kindnet-892179/id_rsa Username:docker}
	I1222 01:51:27.466230 1733459 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:51:27.469391 1733459 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:51:27.469420 1733459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:51:27.469437 1733459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:51:27.469492 1733459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:51:27.469583 1733459 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:51:27.469695 1733459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:51:27.477202 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:51:27.494862 1733459 start.go:296] duration metric: took 144.741114ms for postStartSetup
	I1222 01:51:27.495237 1733459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-892179
	I1222 01:51:27.513148 1733459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/config.json ...
	I1222 01:51:27.513436 1733459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:51:27.513496 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:27.530780 1733459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38732 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kindnet-892179/id_rsa Username:docker}
	I1222 01:51:27.623265 1733459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:51:27.628055 1733459 start.go:128] duration metric: took 9.76966111s to createHost
	I1222 01:51:27.628083 1733459 start.go:83] releasing machines lock for "kindnet-892179", held for 9.76980594s
	I1222 01:51:27.628488 1733459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-892179
	I1222 01:51:27.647291 1733459 ssh_runner.go:195] Run: cat /version.json
	I1222 01:51:27.647318 1733459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:51:27.647345 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:27.647396 1733459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-892179
	I1222 01:51:27.669083 1733459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38732 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kindnet-892179/id_rsa Username:docker}
	I1222 01:51:27.674287 1733459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38732 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/kindnet-892179/id_rsa Username:docker}
	I1222 01:51:27.854958 1733459 ssh_runner.go:195] Run: systemctl --version
	I1222 01:51:27.861548 1733459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:51:27.865824 1733459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:51:27.865899 1733459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:51:27.893219 1733459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:51:27.893245 1733459 start.go:496] detecting cgroup driver to use...
	I1222 01:51:27.893277 1733459 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:51:27.893327 1733459 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:51:27.908476 1733459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:51:27.921589 1733459 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:51:27.921659 1733459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:51:27.938801 1733459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:51:27.957205 1733459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:51:28.097472 1733459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:51:28.221936 1733459 docker.go:234] disabling docker service ...
	I1222 01:51:28.222056 1733459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:51:28.244347 1733459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:51:28.257675 1733459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:51:28.370340 1733459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:51:28.482201 1733459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:51:28.496651 1733459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:51:28.510209 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:51:28.519713 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:51:28.528585 1733459 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:51:28.528696 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:51:28.537521 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:51:28.546616 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:51:28.555925 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:51:28.564880 1733459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:51:28.573503 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:51:28.582467 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:51:28.591293 1733459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:51:28.600131 1733459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:51:28.608122 1733459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:51:28.615564 1733459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:51:28.747718 1733459 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:51:28.891422 1733459 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:51:28.891497 1733459 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:51:28.895682 1733459 start.go:564] Will wait 60s for crictl version
	I1222 01:51:28.895752 1733459 ssh_runner.go:195] Run: which crictl
	I1222 01:51:28.899784 1733459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:51:28.927480 1733459 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:51:28.927560 1733459 ssh_runner.go:195] Run: containerd --version
	I1222 01:51:28.953009 1733459 ssh_runner.go:195] Run: containerd --version
	I1222 01:51:28.980895 1733459 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.1 ...
	I1222 01:51:28.983962 1733459 cli_runner.go:164] Run: docker network inspect kindnet-892179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:51:29.007429 1733459 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:51:29.011426 1733459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:51:29.021893 1733459 kubeadm.go:884] updating cluster {Name:kindnet-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:51:29.022022 1733459 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:51:29.022129 1733459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:51:29.049407 1733459 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:51:29.049432 1733459 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:51:29.049499 1733459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:51:29.074691 1733459 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:51:29.074715 1733459 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:51:29.074725 1733459 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 containerd true true} ...
	I1222 01:51:29.074822 1733459 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-892179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1222 01:51:29.074896 1733459 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:51:29.100828 1733459 cni.go:84] Creating CNI manager for "kindnet"
	I1222 01:51:29.100864 1733459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:51:29.100922 1733459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-892179 NodeName:kindnet-892179 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:51:29.101090 1733459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-892179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
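	[Annotation, not part of the log: the kubeadm config dumped above is one file containing four YAML documents separated by `---`. A minimal sketch of that multi-document layout, written to a throwaway temp file (the path and trimmed fields here are illustrative, not the real generated config):]

```shell
# One kubeadm config file = four YAML documents, each identified by `kind:`.
f=$(mktemp)
cat > "$f" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# List the document kinds, as minikube's generated kubeadm.yaml contains them.
grep '^kind:' "$f"
```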
	I1222 01:51:29.101168 1733459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:51:29.109075 1733459 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:51:29.109149 1733459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:51:29.117030 1733459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1222 01:51:29.130209 1733459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:51:29.143619 1733459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1222 01:51:29.157925 1733459 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:51:29.161852 1733459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
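	[Annotation, not part of the log: the one-liner above is minikube's idempotent hosts-file update — strip any stale line for the name, then append the current mapping. A sketch of the same pattern against a temp file instead of the real /etc/hosts (file paths and the pre-existing entries are illustrative):]

```shell
set -eu
hosts=$(mktemp)
# Seed a hosts file that already carries a (possibly stale) entry for the name.
printf '127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any existing line ending in "<tab>control-plane.minikube.internal",
# then append the desired mapping, so repeated runs leave exactly one entry.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'control-plane.minikube.internal' "$hosts"
```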
	I1222 01:51:29.172236 1733459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:51:29.292112 1733459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:51:29.308977 1733459 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179 for IP: 192.168.76.2
	I1222 01:51:29.309047 1733459 certs.go:195] generating shared ca certs ...
	I1222 01:51:29.309078 1733459 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:29.309279 1733459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:51:29.309352 1733459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:51:29.309374 1733459 certs.go:257] generating profile certs ...
	I1222 01:51:29.309467 1733459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/client.key
	I1222 01:51:29.309505 1733459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/client.crt with IP's: []
	I1222 01:51:29.451954 1733459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/client.crt ...
	I1222 01:51:29.451988 1733459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/client.crt: {Name:mk035d50361f62ffa7fa1291017c511d03e7b533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:29.452227 1733459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/client.key ...
	I1222 01:51:29.452246 1733459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/client.key: {Name:mka78aa30099c3870d1a53400ea46759b6ea5924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:29.452352 1733459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.key.55f78538
	I1222 01:51:29.452372 1733459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.crt.55f78538 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:51:29.736102 1733459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.crt.55f78538 ...
	I1222 01:51:29.736135 1733459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.crt.55f78538: {Name:mk67b3ac0ea7e9a8005afb97eeffff5b6ba68f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:29.736341 1733459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.key.55f78538 ...
	I1222 01:51:29.736393 1733459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.key.55f78538: {Name:mk1cf156648f037e2bc5ff30175e610f30b082b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:29.736532 1733459 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.crt.55f78538 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.crt
	I1222 01:51:29.736620 1733459 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.key.55f78538 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.key
	I1222 01:51:29.736685 1733459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.key
	I1222 01:51:29.736704 1733459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.crt with IP's: []
	I1222 01:51:30.107071 1733459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.crt ...
	I1222 01:51:30.107111 1733459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.crt: {Name:mkc4add3f12dfc4f74d9c37c78bb370db683b4aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:30.107354 1733459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.key ...
	I1222 01:51:30.107374 1733459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.key: {Name:mk447df8c3de20d49eaad0108b00c9329da0bda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:51:30.107602 1733459 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:51:30.107651 1733459 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:51:30.107676 1733459 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:51:30.107707 1733459 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:51:30.107736 1733459 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:51:30.107766 1733459 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:51:30.107819 1733459 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:51:30.108461 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:51:30.130122 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:51:30.151736 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:51:30.171713 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:51:30.193490 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 01:51:30.215718 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:51:30.235310 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:51:30.257207 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kindnet-892179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:51:30.276998 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:51:30.296225 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:51:30.316427 1733459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:51:30.335367 1733459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:51:30.348359 1733459 ssh_runner.go:195] Run: openssl version
	I1222 01:51:30.354682 1733459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:51:30.362305 1733459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:51:30.370363 1733459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:51:30.374752 1733459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:51:30.374820 1733459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:51:30.415629 1733459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:51:30.423347 1733459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:51:30.430772 1733459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:51:30.438425 1733459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:51:30.446050 1733459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:51:30.449781 1733459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:51:30.449854 1733459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:51:30.490734 1733459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:51:30.498444 1733459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:51:30.505987 1733459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:51:30.513706 1733459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:51:30.521385 1733459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:51:30.525337 1733459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:51:30.525461 1733459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:51:30.566381 1733459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:51:30.574408 1733459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
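	[Annotation, not part of the log: the `openssl x509 -hash` / `ln -fs` pairs above install CA certs under the hashed names (e.g. b5213941.0) that OpenSSL uses to look up trust anchors in /etc/ssl/certs. A sketch of that step with a throwaway self-signed cert in a temp dir (requires the `openssl` CLI; names are illustrative):]

```shell
set -eu
dir=$(mktemp -d)
# Generate a stand-in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
# Compute the subject hash and create the <hash>.0 symlink OpenSSL expects.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
openssl x509 -noout -subject -in "$dir/$hash.0"
```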
	I1222 01:51:30.582031 1733459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:51:30.585943 1733459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:51:30.586007 1733459 kubeadm.go:401] StartCluster: {Name:kindnet-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:51:30.586169 1733459 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:51:30.586230 1733459 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:51:30.611869 1733459 cri.go:96] found id: ""
	I1222 01:51:30.611944 1733459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:51:30.619990 1733459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:51:30.627987 1733459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:51:30.628104 1733459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:51:30.635997 1733459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:51:30.636018 1733459 kubeadm.go:158] found existing configuration files:
	
	I1222 01:51:30.636074 1733459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:51:30.643856 1733459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:51:30.643924 1733459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:51:30.651252 1733459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:51:30.659132 1733459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:51:30.659200 1733459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:51:30.666750 1733459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:51:30.675219 1733459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:51:30.675337 1733459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:51:30.682852 1733459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:51:30.690911 1733459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:51:30.690982 1733459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:51:30.698861 1733459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:51:30.744677 1733459 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 01:51:30.745105 1733459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:51:30.775047 1733459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:51:30.775188 1733459 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:51:30.775257 1733459 kubeadm.go:319] OS: Linux
	I1222 01:51:30.775327 1733459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:51:30.775404 1733459 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:51:30.775474 1733459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:51:30.775553 1733459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:51:30.775627 1733459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:51:30.775715 1733459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:51:30.775782 1733459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:51:30.775872 1733459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:51:30.775953 1733459 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:51:30.848266 1733459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:51:30.848439 1733459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:51:30.848569 1733459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:51:30.855101 1733459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:51:30.861904 1733459 out.go:252]   - Generating certificates and keys ...
	I1222 01:51:30.862071 1733459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:51:30.862204 1733459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:51:31.385161 1733459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:51:31.979728 1733459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:51:32.293852 1733459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325328676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325350666Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325391659Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325406576Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325416947Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325430043Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325439348Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325458006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325472513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325505277Z" level=info msg="Connect containerd service"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325765062Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.326389970Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344703104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344887163Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.345002741Z" level=info msg="Start recovering state"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344959344Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364047164Z" level=info msg="Start event monitor"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364252630Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364336438Z" level=info msg="Start streaming server"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364415274Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364479832Z" level=info msg="runtime interface starting up..."
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364536588Z" level=info msg="starting plugins..."
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364617967Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364818765Z" level=info msg="containerd successfully booted in 0.063428s"
	Dec 22 01:36:29 no-preload-154186 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:51:35.703593    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:35.704532    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:35.706282    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:35.706863    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:51:35.708580    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:51:35 up 1 day,  8:34,  0 user,  load average: 1.77, 1.63, 1.55
	Linux no-preload-154186 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:51:32 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:51:33 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 22 01:51:33 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:33 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:33 no-preload-154186 kubelet[7983]: E1222 01:51:33.295045    7983 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:51:33 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:51:33 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:51:33 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 22 01:51:33 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:33 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:34 no-preload-154186 kubelet[7989]: E1222 01:51:34.121030    7989 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:51:34 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:51:34 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:51:34 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 22 01:51:34 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:34 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:34 no-preload-154186 kubelet[8023]: E1222 01:51:34.879866    8023 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:51:34 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:51:34 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:51:35 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 22 01:51:35 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:35 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:51:35 no-preload-154186 kubelet[8122]: E1222 01:51:35.819588    8122 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:51:35 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:51:35 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 2 (421.709544ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.40s)

TestStartStop/group/newest-cni/serial/Pause (9.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-869293 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (305.327418ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-869293 -n newest-cni-869293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (390.518797ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-869293 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (328.150312ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-869293 -n newest-cni-869293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (307.098404ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-869293
helpers_test.go:244: (dbg) docker inspect newest-cni-869293:

-- stdout --
	[
	    {
	        "Id": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	        "Created": "2025-12-22T01:28:35.561963158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1685878,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:38:31.964858425Z",
	            "FinishedAt": "2025-12-22T01:38:30.65991944Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hostname",
	        "HostsPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hosts",
	        "LogPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e-json.log",
	        "Name": "/newest-cni-869293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-869293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-869293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	                "LowerDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-869293",
	                "Source": "/var/lib/docker/volumes/newest-cni-869293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-869293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-869293",
	                "name.minikube.sigs.k8s.io": "newest-cni-869293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e62360fe6e0fa793fd3d0004ae901a019cba72f07e506d4e4de6097400773d18",
	            "SandboxKey": "/var/run/docker/netns/e62360fe6e0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38707"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38708"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38711"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38709"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38710"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-869293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:95:8a:54:97:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "237b6ac5b33ea8f647685859c16cf161283b5f3d52eea65816f2e7dfeb4ec191",
	                    "EndpointID": "5a4926332b20d8c327aefbaecbda7375782c9a567c1a86203a3a41986fbfb8d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-869293",
	                        "05e1fe12904b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (321.939238ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25: (1.600971592s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p no-preload-154186 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p no-preload-154186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ stop    │ -p newest-cni-869293 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-869293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ image   │ newest-cni-869293 image list --format=json                                                                                                                                                                                                               │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:44 UTC │ 22 Dec 25 01:44 UTC │
	│ pause   │ -p newest-cni-869293 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:44 UTC │ 22 Dec 25 01:44 UTC │
	│ unpause │ -p newest-cni-869293 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:44 UTC │ 22 Dec 25 01:44 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:31.686572 1685746 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:31.686782 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.686816 1685746 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:31.686836 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.687133 1685746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:38:31.687563 1685746 out.go:368] Setting JSON to false
	I1222 01:38:31.688584 1685746 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116465,"bootTime":1766251047,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:38:31.688686 1685746 start.go:143] virtualization:  
	I1222 01:38:31.691576 1685746 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:31.695464 1685746 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:31.695552 1685746 notify.go:221] Checking for updates...
	I1222 01:38:31.701535 1685746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:31.704637 1685746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:31.707560 1685746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:38:31.710534 1685746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:31.713575 1685746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:31.717166 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:31.717762 1685746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:31.753414 1685746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:31.753539 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.812499 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.803096079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.812613 1685746 docker.go:319] overlay module found
	I1222 01:38:31.815770 1685746 out.go:179] * Using the docker driver based on existing profile
	I1222 01:38:31.818545 1685746 start.go:309] selected driver: docker
	I1222 01:38:31.818566 1685746 start.go:928] validating driver "docker" against &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.818662 1685746 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:31.819384 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.880587 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.870819289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.880955 1685746 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:31.880984 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:31.881038 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:31.881081 1685746 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.884279 1685746 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:38:31.887056 1685746 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:38:31.890043 1685746 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:31.892868 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:31.892919 1685746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:38:31.892932 1685746 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:31.892952 1685746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:31.893022 1685746 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:31.893039 1685746 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:38:31.893153 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:31.913018 1685746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:31.913041 1685746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:31.913060 1685746 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:31.913090 1685746 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:31.913180 1685746 start.go:364] duration metric: took 44.275µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:38:31.913204 1685746 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:38:31.913210 1685746 fix.go:54] fixHost starting: 
	I1222 01:38:31.913477 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:31.930780 1685746 fix.go:112] recreateIfNeeded on newest-cni-869293: state=Stopped err=<nil>
	W1222 01:38:31.930815 1685746 fix.go:138] unexpected machine state, will restart: <nil>
	W1222 01:38:29.750532 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:32.248109 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:31.934050 1685746 out.go:252] * Restarting existing docker container for "newest-cni-869293" ...
	I1222 01:38:31.934152 1685746 cli_runner.go:164] Run: docker start newest-cni-869293
	I1222 01:38:32.204881 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:32.243691 1685746 kic.go:430] container "newest-cni-869293" state is running.
	I1222 01:38:32.244096 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:32.265947 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:32.266210 1685746 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:32.266268 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:32.293919 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:32.294281 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:32.294292 1685746 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:32.294932 1685746 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54476->127.0.0.1:38707: read: connection reset by peer
	I1222 01:38:35.433786 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.433813 1685746 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:38:35.433886 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.451516 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.451830 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.451848 1685746 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:38:35.591409 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.591519 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.609341 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.609647 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.609670 1685746 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:35.742798 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:35.742824 1685746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:38:35.742864 1685746 ubuntu.go:190] setting up certificates
	I1222 01:38:35.742881 1685746 provision.go:84] configureAuth start
	I1222 01:38:35.742942 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:35.763152 1685746 provision.go:143] copyHostCerts
	I1222 01:38:35.763214 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:38:35.763230 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:38:35.763306 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:38:35.763401 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:38:35.763407 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:38:35.763431 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:38:35.763483 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:38:35.763490 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:38:35.763514 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:38:35.763557 1685746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:38:35.889485 1685746 provision.go:177] copyRemoteCerts
	I1222 01:38:35.889557 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:35.889605 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.914143 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.016150 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:38:36.035930 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:36.054716 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:36.072586 1685746 provision.go:87] duration metric: took 329.680992ms to configureAuth
	I1222 01:38:36.072618 1685746 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:36.072830 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:36.072842 1685746 machine.go:97] duration metric: took 3.806623107s to provisionDockerMachine
	I1222 01:38:36.072850 1685746 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:38:36.072866 1685746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:36.072926 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:36.072980 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.090324 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.187013 1685746 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:36.191029 1685746 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:36.191111 1685746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:36.191134 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:38:36.191215 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:38:36.191355 1685746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:38:36.191477 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:36.200008 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:36.219292 1685746 start.go:296] duration metric: took 146.420744ms for postStartSetup
	I1222 01:38:36.219381 1685746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:36.219430 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.237412 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.336664 1685746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:36.342619 1685746 fix.go:56] duration metric: took 4.429400761s for fixHost
	I1222 01:38:36.342646 1685746 start.go:83] releasing machines lock for "newest-cni-869293", held for 4.429452897s
	I1222 01:38:36.342750 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:36.362211 1685746 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:36.362264 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.362344 1685746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:36.362407 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.385216 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.393122 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.571819 1685746 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:36.578591 1685746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:36.583121 1685746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:36.583193 1685746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:36.591539 1685746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:38:36.591564 1685746 start.go:496] detecting cgroup driver to use...
	I1222 01:38:36.591620 1685746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:36.591689 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:38:36.609980 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:38:36.623763 1685746 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:36.623883 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:36.639236 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:36.652937 1685746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:36.763224 1685746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:36.883204 1685746 docker.go:234] disabling docker service ...
	I1222 01:38:36.883275 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:36.898372 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:36.911453 1685746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:37.034252 1685746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:37.157335 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:37.170564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:37.185195 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:38:37.194710 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:38:37.204647 1685746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:38:37.204731 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:38:37.214808 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.223830 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:38:37.232600 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.242680 1685746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:37.254369 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:38:37.265094 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:38:37.278711 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:38:37.288297 1685746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:37.299386 1685746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:37.306803 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.412668 1685746 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:38:37.531042 1685746 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:38:37.531187 1685746 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:38:37.535291 1685746 start.go:564] Will wait 60s for crictl version
	I1222 01:38:37.535398 1685746 ssh_runner.go:195] Run: which crictl
	I1222 01:38:37.539239 1685746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:37.568186 1685746 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:38:37.568329 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.589324 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.614497 1685746 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:38:37.617592 1685746 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:37.633737 1685746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:37.637631 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.650774 1685746 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1222 01:38:34.249047 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:36.748953 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:37.653725 1685746 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:37.653882 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:37.653965 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.679481 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.679507 1685746 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:38:37.679567 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.707944 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.707969 1685746 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:37.707979 1685746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:38:37.708083 1685746 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:37.708165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:38:37.740577 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:37.740600 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:37.740621 1685746 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:37.740645 1685746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:37.740759 1685746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:38:37.740831 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:37.749395 1685746 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:37.749470 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:37.757587 1685746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:38:37.770794 1685746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:37.784049 1685746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:38:37.797792 1685746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:37.801552 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.811598 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.940636 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:37.962625 1685746 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:38:37.962649 1685746 certs.go:195] generating shared ca certs ...
	I1222 01:38:37.962682 1685746 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:37.962837 1685746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:38:37.962900 1685746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:38:37.962912 1685746 certs.go:257] generating profile certs ...
	I1222 01:38:37.963014 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:38:37.963084 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:38:37.963128 1685746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:38:37.963238 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:38:37.963276 1685746 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:37.963287 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:38:37.963316 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:37.963343 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:37.963379 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:37.963434 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:37.964596 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:37.999913 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:38.025465 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:38.053443 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:38.087732 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:38.107200 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:38:38.125482 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:38.143284 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:38.161557 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:38:38.180124 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:38.198446 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:38:38.215766 1685746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:38.228774 1685746 ssh_runner.go:195] Run: openssl version
	I1222 01:38:38.235631 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.244039 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:38:38.252123 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256169 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256240 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.297738 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:38.305673 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.313250 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:38.321143 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325161 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325259 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.366760 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:38.375589 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.383142 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:38:38.391262 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395405 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395474 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.436708 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:38.444445 1685746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:38.448390 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:38:38.489618 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:38:38.530725 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:38:38.571636 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:38:38.612592 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:38:38.653872 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 01:38:38.695135 1685746 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:38.695236 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:38.695304 1685746 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:38.730406 1685746 cri.go:96] found id: ""
	I1222 01:38:38.730480 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:38.742929 1685746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:38:38.742952 1685746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:38:38.743012 1685746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:38:38.765617 1685746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:38:38.766245 1685746 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.766510 1685746 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-869293" cluster setting kubeconfig missing "newest-cni-869293" context setting]
	I1222 01:38:38.766957 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.768687 1685746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:38:38.776658 1685746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:38:38.776695 1685746 kubeadm.go:602] duration metric: took 33.737033ms to restartPrimaryControlPlane
	I1222 01:38:38.776705 1685746 kubeadm.go:403] duration metric: took 81.581475ms to StartCluster
	I1222 01:38:38.776720 1685746 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.776793 1685746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.777670 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.777888 1685746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:38:38.778285 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:38.778259 1685746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:38:38.778393 1685746 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-869293"
	I1222 01:38:38.778408 1685746 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-869293"
	I1222 01:38:38.778433 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.778917 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.779098 1685746 addons.go:70] Setting dashboard=true in profile "newest-cni-869293"
	I1222 01:38:38.779126 1685746 addons.go:239] Setting addon dashboard=true in "newest-cni-869293"
	W1222 01:38:38.779211 1685746 addons.go:248] addon dashboard should already be in state true
	I1222 01:38:38.779264 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.779355 1685746 addons.go:70] Setting default-storageclass=true in profile "newest-cni-869293"
	I1222 01:38:38.779382 1685746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-869293"
	I1222 01:38:38.779657 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.780717 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.783183 1685746 out.go:179] * Verifying Kubernetes components...
	I1222 01:38:38.795835 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:38.839727 1685746 addons.go:239] Setting addon default-storageclass=true in "newest-cni-869293"
	I1222 01:38:38.839773 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.844706 1685746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:38:38.844788 1685746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:38:38.845056 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.847706 1685746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:38.847732 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:38:38.847798 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.850623 1685746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:38:38.856243 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:38:38.856273 1685746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:38:38.856351 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.873943 1685746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:38.873976 1685746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:38:38.874046 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.897069 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.917887 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.925239 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:39.040289 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:39.062591 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:39.071403 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:38:39.071429 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:38:39.085714 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:38:39.085742 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:38:39.113564 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:39.117642 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:38:39.117668 1685746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:38:39.160317 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:38:39.160342 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:38:39.179666 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:38:39.179693 1685746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:38:39.195940 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:38:39.195967 1685746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:38:39.211128 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:38:39.211152 1685746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:38:39.229341 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:38:39.229367 1685746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:38:39.242863 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.242891 1685746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:38:39.257396 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.740898 1685746 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:38:39.740996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:39.741091 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741148 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.741150 1685746 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741362 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.924082 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.987453 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.012530 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.076254 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.106299 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.156991 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.241110 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:40.291973 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.350617 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:40.361182 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.389531 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.437774 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.465333 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.692837 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.741460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:40.766384 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.961925 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.997418 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:41.047996 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:41.103696 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:41.241962 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:41.674831 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.248045 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:41.248244 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:41.741299 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:41.744404 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.118142 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:42.189177 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.241414 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:42.263947 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:42.333305 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.741698 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.241589 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.265699 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:43.338843 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.509282 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:43.559893 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:43.581660 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.623026 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.741112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.241130 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.741229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.931703 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:45.008485 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.244431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.741178 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.765524 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:45.843868 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.977122 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:46.040374 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:46.241453 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:46.486248 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:46.559168 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.248311 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:45.249134 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:47.748896 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:46.741869 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.241095 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.741431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.241112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.294921 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:48.361284 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:48.741773 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.852570 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:48.911873 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.241377 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:49.368148 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:49.429800 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.741220 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.241219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.741547 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:51.241159 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:49.748932 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:52.248838 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:51.741774 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.241901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.391494 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:52.452597 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.452636 1685746 retry.go:84] will retry after 11.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.508552 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:52.579056 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.741603 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.241037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.297681 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:53.358617 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:53.741128 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.241259 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.741444 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.241131 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.741185 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.241903 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:54.748014 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:56.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:56.742022 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.871217 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:56.931377 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:56.931421 1685746 retry.go:84] will retry after 12.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:57.241904 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:57.741132 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.241082 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.741129 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.241514 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.741571 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.241104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.342627 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:00.433191 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.433235 1685746 retry.go:84] will retry after 8.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.741833 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:01.241455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:59.248212 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:01.248492 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:01.741502 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.241599 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.741070 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.241152 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.041996 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:04.111760 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.111812 1685746 retry.go:84] will retry after 10s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.242089 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.741350 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.241736 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.741098 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:06.241279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:03.747982 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:05.748583 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:07.748998 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:06.742311 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.241927 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.741133 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.241157 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.532510 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:08.603273 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.603314 1685746 retry.go:84] will retry after 7.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.741625 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.241616 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.741180 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.845450 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:09.907468 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:10.242040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:10.742004 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:11.242043 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:10.248934 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:12.748076 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:11.741028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.241114 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.741779 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.241398 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.741757 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.084932 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:14.149870 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.149915 1685746 retry.go:84] will retry after 13.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.241288 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.742009 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.241500 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.241659 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.395227 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:16.456949 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:14.748959 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:17.248674 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:16.741507 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.241459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.741042 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.241111 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.741162 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.241875 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.741715 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.241732 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:21.241347 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:19.748622 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:21.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:21.741639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.241911 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.742051 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.241970 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.741127 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.241560 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.741692 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.241106 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.741122 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:26.241137 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:24.248544 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:26.747990 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:26.741585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.241155 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.301256 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:27.375517 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.375598 1685746 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.241034 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.741642 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:29.226555 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:39:29.242011 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:29.291422 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:29.741622 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.245888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:31.241550 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:28.748186 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:30.748280 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:32.748600 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:31.741066 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.241183 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.741695 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.241134 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.741807 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.241685 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.741125 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.241915 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.741241 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:36.241639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:35.249008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:37.748582 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:36.741652 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.241141 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.741891 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.054310 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:38.118505 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.118547 1685746 retry.go:84] will retry after 47.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.241764 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:39.241609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:39.241696 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:39.269891 1685746 cri.go:96] found id: ""
	I1222 01:39:39.269914 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.269923 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:39.269930 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:39.269991 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:39.300389 1685746 cri.go:96] found id: ""
	I1222 01:39:39.300414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.300423 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:39.300430 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:39.300501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:39.326557 1685746 cri.go:96] found id: ""
	I1222 01:39:39.326582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.326592 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:39.326598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:39.326697 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:39.354049 1685746 cri.go:96] found id: ""
	I1222 01:39:39.354115 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.354125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:39.354132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:39.354202 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:39.380457 1685746 cri.go:96] found id: ""
	I1222 01:39:39.380490 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.380500 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:39.380507 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:39.380577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:39.407039 1685746 cri.go:96] found id: ""
	I1222 01:39:39.407062 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.407070 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:39.407076 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:39.407139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:39.431541 1685746 cri.go:96] found id: ""
	I1222 01:39:39.431568 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.431577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:39.431584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:39.431676 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:39.457555 1685746 cri.go:96] found id: ""
	I1222 01:39:39.457588 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.457607 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:39.457616 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:39.457629 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:39.517907 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:39.517997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:39.534348 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:39.534373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:39.607407 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:39.607438 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:39.607463 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:39.634050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:39.634094 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:40.248054 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:42.748083 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:42.163786 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:42.176868 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:42.176959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:42.208642 1685746 cri.go:96] found id: ""
	I1222 01:39:42.208672 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.208682 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:42.208688 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:42.208757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:42.249523 1685746 cri.go:96] found id: ""
	I1222 01:39:42.249552 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.249562 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:42.249569 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:42.249641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:42.283515 1685746 cri.go:96] found id: ""
	I1222 01:39:42.283542 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.283550 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:42.283557 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:42.283659 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:42.312237 1685746 cri.go:96] found id: ""
	I1222 01:39:42.312260 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.312269 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:42.312276 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:42.312335 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:42.341269 1685746 cri.go:96] found id: ""
	I1222 01:39:42.341297 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.341306 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:42.341312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:42.341374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:42.367696 1685746 cri.go:96] found id: ""
	I1222 01:39:42.367723 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.367732 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:42.367739 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:42.367804 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:42.396577 1685746 cri.go:96] found id: ""
	I1222 01:39:42.396602 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.396612 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:42.396618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:42.396689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:42.426348 1685746 cri.go:96] found id: ""
	I1222 01:39:42.426380 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.426392 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:42.426413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:42.426433 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:42.481969 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:42.482005 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:42.499357 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:42.499436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:42.576627 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:42.576649 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:42.576663 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:42.601751 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:42.601784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.131239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:45.157288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:45.157379 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:45.207917 1685746 cri.go:96] found id: ""
	I1222 01:39:45.207953 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.207963 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:45.207975 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:45.208042 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:45.255413 1685746 cri.go:96] found id: ""
	I1222 01:39:45.255448 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.255459 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:45.255467 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:45.255564 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:45.300163 1685746 cri.go:96] found id: ""
	I1222 01:39:45.300196 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.300206 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:45.300214 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:45.300285 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:45.348918 1685746 cri.go:96] found id: ""
	I1222 01:39:45.348943 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.348952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:45.348959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:45.349022 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:45.379477 1685746 cri.go:96] found id: ""
	I1222 01:39:45.379502 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.379512 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:45.379518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:45.379580 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:45.410514 1685746 cri.go:96] found id: ""
	I1222 01:39:45.410535 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.410543 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:45.410550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:45.410611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:45.436661 1685746 cri.go:96] found id: ""
	I1222 01:39:45.436686 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.436695 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:45.436702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:45.436769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:45.466972 1685746 cri.go:96] found id: ""
	I1222 01:39:45.467001 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.467010 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:45.467019 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:45.467032 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:45.567688 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:45.567712 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:45.567731 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:45.593712 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:45.593757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.626150 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:45.626179 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:45.681273 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:45.681310 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:39:44.748908 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:47.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:48.196684 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:48.207640 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:48.207718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:48.232650 1685746 cri.go:96] found id: ""
	I1222 01:39:48.232680 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.232688 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:48.232708 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:48.232772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:48.264801 1685746 cri.go:96] found id: ""
	I1222 01:39:48.264831 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.264841 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:48.264848 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:48.264915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:48.300270 1685746 cri.go:96] found id: ""
	I1222 01:39:48.300300 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.300310 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:48.300317 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:48.300388 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:48.334711 1685746 cri.go:96] found id: ""
	I1222 01:39:48.334782 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.334806 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:48.334821 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:48.334898 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:48.359955 1685746 cri.go:96] found id: ""
	I1222 01:39:48.360023 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.360038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:48.360052 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:48.360124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:48.386551 1685746 cri.go:96] found id: ""
	I1222 01:39:48.386574 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.386583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:48.386589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:48.386648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:48.412026 1685746 cri.go:96] found id: ""
	I1222 01:39:48.412052 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.412062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:48.412069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:48.412129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:48.440847 1685746 cri.go:96] found id: ""
	I1222 01:39:48.440870 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.440878 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:48.440887 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:48.440897 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:48.496591 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:48.496673 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:48.512755 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:48.512834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:48.596174 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:48.596249 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:48.596281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:48.621362 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:48.621397 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:51.155431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:51.169542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:51.169616 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:51.195476 1685746 cri.go:96] found id: ""
	I1222 01:39:51.195500 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.195509 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:51.195516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:51.195585 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:51.220215 1685746 cri.go:96] found id: ""
	I1222 01:39:51.220240 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.220249 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:51.220255 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:51.220324 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:51.248478 1685746 cri.go:96] found id: ""
	I1222 01:39:51.248508 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.248527 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:51.248534 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:51.248594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:51.282587 1685746 cri.go:96] found id: ""
	I1222 01:39:51.282615 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.282624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:51.282630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:51.282691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:51.310999 1685746 cri.go:96] found id: ""
	I1222 01:39:51.311029 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.311038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:51.311044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:51.311105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:51.338337 1685746 cri.go:96] found id: ""
	I1222 01:39:51.338414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.338431 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:51.338438 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:51.338517 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:51.365554 1685746 cri.go:96] found id: ""
	I1222 01:39:51.365582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.365591 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:51.365598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:51.365656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:51.389874 1685746 cri.go:96] found id: ""
	I1222 01:39:51.389903 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.389913 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:51.389922 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:51.389933 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:51.449732 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:51.449797 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:51.467573 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:51.467669 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:51.568437 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:51.568512 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:51.568561 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:51.595758 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:51.595841 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:49.249046 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:51.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:53.905270 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:53.968241 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:53.968406 1685746 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:39:54.129563 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:54.143910 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:54.144012 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:54.169973 1685746 cri.go:96] found id: ""
	I1222 01:39:54.170009 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.170018 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:54.170042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:54.170158 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:54.198811 1685746 cri.go:96] found id: ""
	I1222 01:39:54.198838 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.198847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:54.198854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:54.198917 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:54.224425 1685746 cri.go:96] found id: ""
	I1222 01:39:54.224452 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.224462 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:54.224468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:54.224549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:54.273957 1685746 cri.go:96] found id: ""
	I1222 01:39:54.273983 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.273992 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:54.273998 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:54.274059 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:54.306801 1685746 cri.go:96] found id: ""
	I1222 01:39:54.306826 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.306836 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:54.306842 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:54.306916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:54.339513 1685746 cri.go:96] found id: ""
	I1222 01:39:54.339539 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.339548 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:54.339555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:54.339617 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:54.365259 1685746 cri.go:96] found id: ""
	I1222 01:39:54.365285 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.365295 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:54.365301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:54.365363 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:54.390271 1685746 cri.go:96] found id: ""
	I1222 01:39:54.390294 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.390303 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:54.390312 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:54.390324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:54.445696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:54.445728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:54.460676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:54.460751 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:54.537038 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:54.537060 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:54.537075 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:54.566201 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:54.566234 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:53.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:56.248725 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:57.093953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:57.104681 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:57.104755 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:57.132428 1685746 cri.go:96] found id: ""
	I1222 01:39:57.132455 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.132465 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:57.132472 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:57.132532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:57.158487 1685746 cri.go:96] found id: ""
	I1222 01:39:57.158512 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.158521 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:57.158528 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:57.158589 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:57.184175 1685746 cri.go:96] found id: ""
	I1222 01:39:57.184203 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.184213 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:57.184219 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:57.184279 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:57.215724 1685746 cri.go:96] found id: ""
	I1222 01:39:57.215752 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.215761 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:57.215768 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:57.215830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:57.252375 1685746 cri.go:96] found id: ""
	I1222 01:39:57.252408 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.252420 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:57.252427 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:57.252499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:57.291286 1685746 cri.go:96] found id: ""
	I1222 01:39:57.291323 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.291333 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:57.291344 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:57.291408 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:57.322496 1685746 cri.go:96] found id: ""
	I1222 01:39:57.322577 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.322594 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:57.322602 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:57.322678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:57.352695 1685746 cri.go:96] found id: ""
	I1222 01:39:57.352722 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.352731 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:57.352741 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:57.352754 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:57.410232 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:57.410271 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:57.425451 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:57.425481 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:57.498123 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:57.498197 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:57.498226 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:57.530586 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:57.530677 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:00.062361 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:00.152699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:00.152784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:00.243584 1685746 cri.go:96] found id: ""
	I1222 01:40:00.243618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.243635 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:00.243645 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:00.243728 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:00.323644 1685746 cri.go:96] found id: ""
	I1222 01:40:00.323704 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.323720 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:00.323730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:00.323805 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:00.411473 1685746 cri.go:96] found id: ""
	I1222 01:40:00.411502 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.411521 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:00.411532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:00.411621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:00.511894 1685746 cri.go:96] found id: ""
	I1222 01:40:00.511922 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.511933 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:00.511941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:00.512015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:00.575706 1685746 cri.go:96] found id: ""
	I1222 01:40:00.575736 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.575746 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:00.575753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:00.575828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:00.666886 1685746 cri.go:96] found id: ""
	I1222 01:40:00.666913 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.666922 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:00.666929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:00.667011 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:00.704456 1685746 cri.go:96] found id: ""
	I1222 01:40:00.704490 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.704499 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:00.704513 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:00.704583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:00.763369 1685746 cri.go:96] found id: ""
	I1222 01:40:00.763404 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.763415 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:00.763425 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:00.763439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:00.822507 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:00.822546 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:00.839492 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:00.839529 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:00.911350 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:00.911374 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:00.911389 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:00.937901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:00.937953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:01.674108 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:58.748290 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:00.756406 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:01.748211 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:01.748257 1685746 retry.go:84] will retry after 28.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:03.469297 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:03.480071 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:03.480145 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:03.519512 1685746 cri.go:96] found id: ""
	I1222 01:40:03.519627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.519661 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:03.519709 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:03.520078 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:03.555737 1685746 cri.go:96] found id: ""
	I1222 01:40:03.555763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.555806 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:03.555819 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:03.555909 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:03.580955 1685746 cri.go:96] found id: ""
	I1222 01:40:03.580986 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.580995 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:03.581004 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:03.581068 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:03.610855 1685746 cri.go:96] found id: ""
	I1222 01:40:03.610935 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.610952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:03.610961 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:03.611037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:03.635994 1685746 cri.go:96] found id: ""
	I1222 01:40:03.636019 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.636027 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:03.636033 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:03.636103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:03.661008 1685746 cri.go:96] found id: ""
	I1222 01:40:03.661086 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.661109 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:03.661132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:03.661249 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:03.685551 1685746 cri.go:96] found id: ""
	I1222 01:40:03.685577 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.685586 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:03.685594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:03.685653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:03.710025 1685746 cri.go:96] found id: ""
	I1222 01:40:03.710054 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.710063 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:03.710073 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:03.710109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:03.748992 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:03.749066 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:03.812952 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:03.812990 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:03.828176 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:03.828207 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:03.895557 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:03.895583 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:03.895596 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:06.421124 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:06.432321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:06.432435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:06.458845 1685746 cri.go:96] found id: ""
	I1222 01:40:06.458926 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.458944 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:06.458951 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:06.459024 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:06.483853 1685746 cri.go:96] found id: ""
	I1222 01:40:06.483881 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.483890 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:06.483897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:06.483956 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:06.518710 1685746 cri.go:96] found id: ""
	I1222 01:40:06.518741 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.518750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:06.518757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:06.518821 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:06.549152 1685746 cri.go:96] found id: ""
	I1222 01:40:06.549183 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.549191 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:06.549198 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:06.549256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:06.579003 1685746 cri.go:96] found id: ""
	I1222 01:40:06.579032 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.579041 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:06.579048 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:06.579110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:06.614999 1685746 cri.go:96] found id: ""
	I1222 01:40:06.615029 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.615038 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:06.615045 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:06.615109 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:06.644049 1685746 cri.go:96] found id: ""
	I1222 01:40:06.644073 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.644082 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:06.644088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:06.644150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:06.670551 1685746 cri.go:96] found id: ""
	I1222 01:40:06.670580 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.670590 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:06.670599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:06.670630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:03.248649 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:05.249130 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:07.749016 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:06.696127 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:06.696164 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:06.728583 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:06.728612 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:06.788068 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:06.788103 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:06.805676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:06.805708 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:06.875097 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.375863 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:09.386805 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:09.386883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:09.413272 1685746 cri.go:96] found id: ""
	I1222 01:40:09.413299 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.413307 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:09.413313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:09.413374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:09.438591 1685746 cri.go:96] found id: ""
	I1222 01:40:09.438615 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.438623 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:09.438630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:09.438692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:09.463919 1685746 cri.go:96] found id: ""
	I1222 01:40:09.463943 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.463952 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:09.463959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:09.464026 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:09.493604 1685746 cri.go:96] found id: ""
	I1222 01:40:09.493627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.493641 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:09.493648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:09.493707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:09.529370 1685746 cri.go:96] found id: ""
	I1222 01:40:09.529394 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.529404 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:09.529411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:09.529477 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:09.562121 1685746 cri.go:96] found id: ""
	I1222 01:40:09.562150 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.562160 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:09.562167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:09.562233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:09.587896 1685746 cri.go:96] found id: ""
	I1222 01:40:09.587924 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.587935 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:09.587942 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:09.588010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:09.613576 1685746 cri.go:96] found id: ""
	I1222 01:40:09.613600 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.613609 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:09.613619 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:09.613630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:09.671590 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:09.671627 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:09.688438 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:09.688468 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:09.770484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.770797 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:09.770834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:09.803134 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:09.803237 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:10.247989 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:12.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:12.334803 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:12.345660 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:12.345780 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:12.375026 1685746 cri.go:96] found id: ""
	I1222 01:40:12.375056 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.375067 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:12.375075 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:12.375154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:12.400255 1685746 cri.go:96] found id: ""
	I1222 01:40:12.400282 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.400291 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:12.400299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:12.400402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:12.425430 1685746 cri.go:96] found id: ""
	I1222 01:40:12.425458 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.425467 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:12.425474 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:12.425535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:12.450734 1685746 cri.go:96] found id: ""
	I1222 01:40:12.450816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.450832 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:12.450841 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:12.450918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:12.477690 1685746 cri.go:96] found id: ""
	I1222 01:40:12.477719 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.477735 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:12.477742 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:12.477803 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:12.517751 1685746 cri.go:96] found id: ""
	I1222 01:40:12.517779 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.517787 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:12.517794 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:12.517858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:12.544749 1685746 cri.go:96] found id: ""
	I1222 01:40:12.544777 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.544786 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:12.544793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:12.544858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:12.576758 1685746 cri.go:96] found id: ""
	I1222 01:40:12.576786 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.576795 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:12.576805 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:12.576816 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:12.592450 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:12.592478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:12.658073 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:12.658125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:12.658138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:12.683599 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:12.683637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:12.715675 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:12.715707 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:15.275108 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:15.285651 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:15.285724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:15.311249 1685746 cri.go:96] found id: ""
	I1222 01:40:15.311277 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.311287 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:15.311293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:15.311353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:15.336192 1685746 cri.go:96] found id: ""
	I1222 01:40:15.336218 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.336226 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:15.336234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:15.336297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:15.362231 1685746 cri.go:96] found id: ""
	I1222 01:40:15.362254 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.362263 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:15.362269 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:15.362331 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:15.390149 1685746 cri.go:96] found id: ""
	I1222 01:40:15.390176 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.390185 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:15.390192 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:15.390259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:15.417421 1685746 cri.go:96] found id: ""
	I1222 01:40:15.417446 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.417456 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:15.417464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:15.417530 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:15.444318 1685746 cri.go:96] found id: ""
	I1222 01:40:15.444346 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.444356 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:15.444368 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:15.444428 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:15.469475 1685746 cri.go:96] found id: ""
	I1222 01:40:15.469503 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.469512 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:15.469520 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:15.469581 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:15.501561 1685746 cri.go:96] found id: ""
	I1222 01:40:15.501588 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.501597 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:15.501606 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:15.501637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:15.518032 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:15.518062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:15.588024 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:15.588049 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:15.588062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:15.613914 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:15.613953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:15.645712 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:15.645739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:40:14.747949 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:16.749012 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:18.200926 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:18.211578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:18.211651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:18.237396 1685746 cri.go:96] found id: ""
	I1222 01:40:18.237421 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.237429 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:18.237436 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:18.237503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:18.264313 1685746 cri.go:96] found id: ""
	I1222 01:40:18.264345 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.264356 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:18.264369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:18.264451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:18.290240 1685746 cri.go:96] found id: ""
	I1222 01:40:18.290265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.290274 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:18.290281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:18.290340 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:18.315874 1685746 cri.go:96] found id: ""
	I1222 01:40:18.315898 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.315907 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:18.315914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:18.315975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:18.340813 1685746 cri.go:96] found id: ""
	I1222 01:40:18.340836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.340844 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:18.340852 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:18.340912 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:18.368094 1685746 cri.go:96] found id: ""
	I1222 01:40:18.368119 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.368128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:18.368135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:18.368251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:18.393525 1685746 cri.go:96] found id: ""
	I1222 01:40:18.393551 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.393559 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:18.393566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:18.393629 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:18.419984 1685746 cri.go:96] found id: ""
	I1222 01:40:18.420011 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.420020 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:18.420031 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:18.420043 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:18.435061 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:18.435090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:18.511216 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:18.511242 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:18.511258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:18.539215 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:18.539253 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:18.571721 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:18.571752 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.133335 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:21.144470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:21.144552 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:21.170402 1685746 cri.go:96] found id: ""
	I1222 01:40:21.170435 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.170444 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:21.170451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:21.170514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:21.197647 1685746 cri.go:96] found id: ""
	I1222 01:40:21.197674 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.197683 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:21.197690 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:21.197754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:21.231085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.231120 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.231130 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:21.231137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:21.231243 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:21.268085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.268112 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.268121 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:21.268129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:21.268195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:21.293752 1685746 cri.go:96] found id: ""
	I1222 01:40:21.293781 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.293791 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:21.293797 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:21.293864 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:21.320171 1685746 cri.go:96] found id: ""
	I1222 01:40:21.320195 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.320203 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:21.320210 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:21.320273 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:21.346069 1685746 cri.go:96] found id: ""
	I1222 01:40:21.346162 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.346177 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:21.346185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:21.346246 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:21.371416 1685746 cri.go:96] found id: ""
	I1222 01:40:21.371443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.371452 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:21.371462 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:21.371475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:21.404674 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:21.404703 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.460348 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:21.460388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:21.475958 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:21.475994 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:21.561495 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:21.561520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:21.561533 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:19.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:21.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:24.089244 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:24.100814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:24.100889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:24.126847 1685746 cri.go:96] found id: ""
	I1222 01:40:24.126878 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.126888 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:24.126895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:24.126959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:24.152740 1685746 cri.go:96] found id: ""
	I1222 01:40:24.152768 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.152778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:24.152784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:24.152845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:24.178506 1685746 cri.go:96] found id: ""
	I1222 01:40:24.178532 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.178540 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:24.178547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:24.178628 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:24.210111 1685746 cri.go:96] found id: ""
	I1222 01:40:24.210138 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.210147 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:24.210156 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:24.210219 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:24.234336 1685746 cri.go:96] found id: ""
	I1222 01:40:24.234358 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.234372 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:24.234379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:24.234440 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:24.259792 1685746 cri.go:96] found id: ""
	I1222 01:40:24.259861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.259884 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:24.259898 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:24.259973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:24.285594 1685746 cri.go:96] found id: ""
	I1222 01:40:24.285623 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.285632 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:24.285639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:24.285722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:24.312027 1685746 cri.go:96] found id: ""
	I1222 01:40:24.312055 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.312064 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:24.312074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:24.312088 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:24.345845 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:24.345873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:24.404101 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:24.404140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:24.419436 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:24.419465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:24.485147 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:24.485182 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:24.485195 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:25.275578 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:40:25.338578 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:25.338685 1685746 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:40:23.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:25.748112 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:27.748979 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:27.016338 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:27.030615 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:27.030685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:27.060751 1685746 cri.go:96] found id: ""
	I1222 01:40:27.060775 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.060784 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:27.060791 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:27.060850 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:27.088784 1685746 cri.go:96] found id: ""
	I1222 01:40:27.088807 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.088816 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:27.088822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:27.088889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:27.115559 1685746 cri.go:96] found id: ""
	I1222 01:40:27.115581 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.115590 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:27.115596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:27.115658 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:27.141509 1685746 cri.go:96] found id: ""
	I1222 01:40:27.141579 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.141602 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:27.141624 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:27.141712 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:27.168944 1685746 cri.go:96] found id: ""
	I1222 01:40:27.168984 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.168993 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:27.169006 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:27.169076 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:27.194554 1685746 cri.go:96] found id: ""
	I1222 01:40:27.194584 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.194593 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:27.194599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:27.194662 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:27.219603 1685746 cri.go:96] found id: ""
	I1222 01:40:27.219684 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.219707 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:27.219721 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:27.219801 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:27.246999 1685746 cri.go:96] found id: ""
	I1222 01:40:27.247033 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.247042 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:27.247067 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:27.247087 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:27.302977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:27.303012 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:27.318364 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:27.318398 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:27.385339 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:27.385413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:27.385442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:27.411346 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:27.411384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:29.941731 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:29.955808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:29.955883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:29.982684 1685746 cri.go:96] found id: ""
	I1222 01:40:29.982709 1685746 logs.go:282] 0 containers: []
	W1222 01:40:29.982718 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:29.982725 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:29.982796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:30.036793 1685746 cri.go:96] found id: ""
	I1222 01:40:30.036836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.036847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:30.036858 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:30.036986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:30.127706 1685746 cri.go:96] found id: ""
	I1222 01:40:30.127740 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.127750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:30.127757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:30.127828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:30.158476 1685746 cri.go:96] found id: ""
	I1222 01:40:30.158509 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.158521 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:30.158529 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:30.158598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:30.187425 1685746 cri.go:96] found id: ""
	I1222 01:40:30.187453 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.187463 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:30.187470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:30.187539 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:30.216013 1685746 cri.go:96] found id: ""
	I1222 01:40:30.216043 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.216052 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:30.216060 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:30.216125 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:30.241947 1685746 cri.go:96] found id: ""
	I1222 01:40:30.241975 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.241985 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:30.241991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:30.242074 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:30.271569 1685746 cri.go:96] found id: ""
	I1222 01:40:30.271595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.271603 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:30.271613 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:30.271625 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:30.327858 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:30.327896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:30.343479 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:30.343505 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:30.411657 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:30.411678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:30.411692 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:30.436851 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:30.436886 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:30.511390 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:40:30.582457 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:30.582560 1685746 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:40:30.587532 1685746 out.go:179] * Enabled addons: 
	I1222 01:40:30.590426 1685746 addons.go:530] duration metric: took 1m51.812167431s for enable addons: enabled=[]
	W1222 01:40:30.247997 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:32.248097 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:32.969406 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:32.980360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:32.980444 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:33.016753 1685746 cri.go:96] found id: ""
	I1222 01:40:33.016778 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.016787 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:33.016795 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:33.016881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:33.053288 1685746 cri.go:96] found id: ""
	I1222 01:40:33.053315 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.053334 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:33.053358 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:33.053457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:33.087392 1685746 cri.go:96] found id: ""
	I1222 01:40:33.087417 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.087426 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:33.087432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:33.087492 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:33.113564 1685746 cri.go:96] found id: ""
	I1222 01:40:33.113595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.113604 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:33.113611 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:33.113698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:33.143733 1685746 cri.go:96] found id: ""
	I1222 01:40:33.143757 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.143766 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:33.143772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:33.143835 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:33.169776 1685746 cri.go:96] found id: ""
	I1222 01:40:33.169808 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.169816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:33.169824 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:33.169887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:33.198413 1685746 cri.go:96] found id: ""
	I1222 01:40:33.198438 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.198446 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:33.198453 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:33.198514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:33.223746 1685746 cri.go:96] found id: ""
	I1222 01:40:33.223816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.223838 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:33.223855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:33.223866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:33.249217 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:33.249247 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:33.282243 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:33.282269 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:33.340677 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:33.340714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:33.355635 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:33.355667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:33.438690 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:35.940454 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:35.954241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:35.954312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:35.979549 1685746 cri.go:96] found id: ""
	I1222 01:40:35.979576 1685746 logs.go:282] 0 containers: []
	W1222 01:40:35.979585 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:35.979592 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:35.979654 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:36.010177 1685746 cri.go:96] found id: ""
	I1222 01:40:36.010207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.010217 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:36.010224 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:36.010295 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:36.045048 1685746 cri.go:96] found id: ""
	I1222 01:40:36.045078 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.045088 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:36.045095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:36.045157 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:36.074866 1685746 cri.go:96] found id: ""
	I1222 01:40:36.074889 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.074897 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:36.074903 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:36.074965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:36.101425 1685746 cri.go:96] found id: ""
	I1222 01:40:36.101499 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.101511 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:36.101518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:36.106750 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:36.134167 1685746 cri.go:96] found id: ""
	I1222 01:40:36.134205 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.134215 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:36.134223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:36.134288 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:36.159767 1685746 cri.go:96] found id: ""
	I1222 01:40:36.159792 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.159802 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:36.159809 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:36.159873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:36.188878 1685746 cri.go:96] found id: ""
	I1222 01:40:36.188907 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.188917 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:36.188928 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:36.188941 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:36.253797 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:36.253877 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:36.253906 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:36.279371 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:36.279408 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:36.308866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:36.308901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:36.365568 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:36.365603 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:34.248867 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:36.748755 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:38.881766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:38.892862 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:38.892944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:38.919366 1685746 cri.go:96] found id: ""
	I1222 01:40:38.919399 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.919409 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:38.919421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:38.919495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:38.953015 1685746 cri.go:96] found id: ""
	I1222 01:40:38.953042 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.953051 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:38.953058 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:38.953121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:38.979133 1685746 cri.go:96] found id: ""
	I1222 01:40:38.979158 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.979167 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:38.979173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:38.979236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:39.017688 1685746 cri.go:96] found id: ""
	I1222 01:40:39.017714 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.017724 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:39.017735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:39.017797 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:39.056591 1685746 cri.go:96] found id: ""
	I1222 01:40:39.056614 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.056622 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:39.056629 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:39.056686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:39.085085 1685746 cri.go:96] found id: ""
	I1222 01:40:39.085155 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.085177 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:39.085199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:39.085296 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:39.114614 1685746 cri.go:96] found id: ""
	I1222 01:40:39.114640 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.114649 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:39.114656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:39.114738 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:39.140466 1685746 cri.go:96] found id: ""
	I1222 01:40:39.140511 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.140520 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:39.140545 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:39.140564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:39.208956 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:39.208979 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:39.208992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:39.234396 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:39.234430 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:39.264983 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:39.265011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:39.320138 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:39.320173 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:38.748943 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:41.248791 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:41.835978 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:41.846958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:41.847061 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:41.872281 1685746 cri.go:96] found id: ""
	I1222 01:40:41.872307 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.872318 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:41.872324 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:41.872429 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:41.902068 1685746 cri.go:96] found id: ""
	I1222 01:40:41.902127 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.902137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:41.902163 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:41.902275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:41.936505 1685746 cri.go:96] found id: ""
	I1222 01:40:41.936535 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.936544 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:41.936550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:41.936615 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:41.961446 1685746 cri.go:96] found id: ""
	I1222 01:40:41.961480 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.961489 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:41.961496 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:41.961569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:41.989500 1685746 cri.go:96] found id: ""
	I1222 01:40:41.989582 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.989606 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:41.989631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:41.989730 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:42.028918 1685746 cri.go:96] found id: ""
	I1222 01:40:42.028947 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.028956 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:42.028963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:42.029037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:42.065570 1685746 cri.go:96] found id: ""
	I1222 01:40:42.065618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.065633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:42.065641 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:42.065724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:42.095634 1685746 cri.go:96] found id: ""
	I1222 01:40:42.095661 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.095671 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:42.095681 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:42.095702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:42.158126 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:42.158170 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:42.175600 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:42.175640 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:42.256856 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:42.256882 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:42.256896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:42.283618 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:42.283665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:44.813189 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:44.824766 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:44.824836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:44.853167 1685746 cri.go:96] found id: ""
	I1222 01:40:44.853192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.853201 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:44.853208 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:44.853269 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:44.878679 1685746 cri.go:96] found id: ""
	I1222 01:40:44.878711 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.878721 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:44.878728 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:44.878792 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:44.905070 1685746 cri.go:96] found id: ""
	I1222 01:40:44.905097 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.905106 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:44.905113 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:44.905177 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:44.930494 1685746 cri.go:96] found id: ""
	I1222 01:40:44.930523 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.930533 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:44.930539 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:44.930599 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:44.960159 1685746 cri.go:96] found id: ""
	I1222 01:40:44.960187 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.960196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:44.960203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:44.960308 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:44.985038 1685746 cri.go:96] found id: ""
	I1222 01:40:44.985066 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.985076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:44.985083 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:44.985147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:45.046474 1685746 cri.go:96] found id: ""
	I1222 01:40:45.046501 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.046511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:45.046518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:45.046590 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:45.111231 1685746 cri.go:96] found id: ""
	I1222 01:40:45.111266 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.111275 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:45.111286 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:45.111299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:45.180293 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:45.180418 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:45.231743 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:45.231786 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:45.318004 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:45.318031 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:45.318045 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:45.351434 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:45.351474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:43.748820 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:45.748974 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:47.885492 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:47.896303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:47.896380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:47.927221 1685746 cri.go:96] found id: ""
	I1222 01:40:47.927247 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.927257 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:47.927264 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:47.927326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:47.955055 1685746 cri.go:96] found id: ""
	I1222 01:40:47.955082 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.955091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:47.955098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:47.955167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:47.982730 1685746 cri.go:96] found id: ""
	I1222 01:40:47.982760 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.982770 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:47.982777 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:47.982841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:48.013060 1685746 cri.go:96] found id: ""
	I1222 01:40:48.013093 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.013104 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:48.013111 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:48.013184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:48.044824 1685746 cri.go:96] found id: ""
	I1222 01:40:48.044902 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.044918 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:48.044926 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:48.044994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:48.077777 1685746 cri.go:96] found id: ""
	I1222 01:40:48.077806 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.077816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:48.077822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:48.077887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:48.108631 1685746 cri.go:96] found id: ""
	I1222 01:40:48.108659 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.108669 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:48.108676 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:48.108767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:48.135002 1685746 cri.go:96] found id: ""
	I1222 01:40:48.135035 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.135045 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:48.135056 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:48.135092 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:48.192262 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:48.192299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:48.207972 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:48.208074 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:48.295537 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:48.295563 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:48.295583 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:48.322629 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:48.322665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:50.857236 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:50.868315 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:50.868396 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:50.894289 1685746 cri.go:96] found id: ""
	I1222 01:40:50.894337 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.894346 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:50.894353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:50.894414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:50.920265 1685746 cri.go:96] found id: ""
	I1222 01:40:50.920288 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.920297 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:50.920303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:50.920362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:50.946413 1685746 cri.go:96] found id: ""
	I1222 01:40:50.946437 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.946445 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:50.946452 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:50.946511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:50.973167 1685746 cri.go:96] found id: ""
	I1222 01:40:50.973192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.973202 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:50.973209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:50.973278 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:50.998695 1685746 cri.go:96] found id: ""
	I1222 01:40:50.998730 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.998739 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:50.998746 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:50.998812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:51.027679 1685746 cri.go:96] found id: ""
	I1222 01:40:51.027748 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.027770 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:51.027792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:51.027882 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:51.057709 1685746 cri.go:96] found id: ""
	I1222 01:40:51.057791 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.057816 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:51.057839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:51.057933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:51.085239 1685746 cri.go:96] found id: ""
	I1222 01:40:51.085311 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.085335 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:51.085361 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:51.085402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:51.143088 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:51.143131 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:51.159838 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:51.159866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:51.229894 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:51.229917 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:51.229932 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:51.258211 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:51.258321 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:48.248802 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:50.748310 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:53.799763 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:53.811321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:53.811400 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:53.838808 1685746 cri.go:96] found id: ""
	I1222 01:40:53.838834 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.838844 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:53.838851 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:53.838918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:53.865906 1685746 cri.go:96] found id: ""
	I1222 01:40:53.865930 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.865938 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:53.865945 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:53.866008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:53.891986 1685746 cri.go:96] found id: ""
	I1222 01:40:53.892030 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.892040 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:53.892047 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:53.892120 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:53.918633 1685746 cri.go:96] found id: ""
	I1222 01:40:53.918660 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.918670 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:53.918677 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:53.918748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:53.945224 1685746 cri.go:96] found id: ""
	I1222 01:40:53.945259 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.945268 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:53.945274 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:53.945345 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:53.976181 1685746 cri.go:96] found id: ""
	I1222 01:40:53.976207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.976216 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:53.976223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:53.976286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:54.017529 1685746 cri.go:96] found id: ""
	I1222 01:40:54.017609 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.017633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:54.017657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:54.017766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:54.050157 1685746 cri.go:96] found id: ""
	I1222 01:40:54.050234 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.050257 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:54.050284 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:54.050322 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:54.107873 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:54.107911 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:54.123115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:54.123192 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:54.189938 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:54.189963 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:54.189976 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:54.216904 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:54.216959 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:53.248434 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:55.748007 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:57.748191 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:56.757953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:56.769647 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:56.769793 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:56.802913 1685746 cri.go:96] found id: ""
	I1222 01:40:56.802941 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.802951 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:56.802958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:56.803018 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:56.828625 1685746 cri.go:96] found id: ""
	I1222 01:40:56.828654 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.828664 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:56.828671 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:56.828734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:56.853350 1685746 cri.go:96] found id: ""
	I1222 01:40:56.853378 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.853388 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:56.853394 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:56.853456 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:56.883418 1685746 cri.go:96] found id: ""
	I1222 01:40:56.883443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.883458 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:56.883466 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:56.883532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:56.912769 1685746 cri.go:96] found id: ""
	I1222 01:40:56.912799 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.912809 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:56.912817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:56.912880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:56.938494 1685746 cri.go:96] found id: ""
	I1222 01:40:56.938519 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.938529 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:56.938536 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:56.938602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:56.968944 1685746 cri.go:96] found id: ""
	I1222 01:40:56.968978 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.968987 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:56.968994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:56.969063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:56.995238 1685746 cri.go:96] found id: ""
	I1222 01:40:56.995265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.995274 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:56.995284 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:56.995295 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:57.022601 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:57.022641 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:57.055915 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:57.055993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:57.110958 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:57.110993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:57.126557 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:57.126587 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:57.199192 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:59.699460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:59.709928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:59.709999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:59.734831 1685746 cri.go:96] found id: ""
	I1222 01:40:59.734861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.734870 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:59.734876 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:59.734939 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:59.766737 1685746 cri.go:96] found id: ""
	I1222 01:40:59.766765 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.766773 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:59.766785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:59.766845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:59.800714 1685746 cri.go:96] found id: ""
	I1222 01:40:59.800742 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.800751 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:59.800757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:59.800817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:59.828842 1685746 cri.go:96] found id: ""
	I1222 01:40:59.828871 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.828880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:59.828888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:59.828951 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:59.854824 1685746 cri.go:96] found id: ""
	I1222 01:40:59.854848 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.854857 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:59.854864 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:59.854928 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:59.879691 1685746 cri.go:96] found id: ""
	I1222 01:40:59.879761 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.879784 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:59.879798 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:59.879874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:59.905099 1685746 cri.go:96] found id: ""
	I1222 01:40:59.905136 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.905146 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:59.905152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:59.905232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:59.929727 1685746 cri.go:96] found id: ""
	I1222 01:40:59.929763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.929775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:59.929784 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:59.929794 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:59.985430 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:59.985466 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:00.001212 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:00.001238 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:00.267041 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:00.267072 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:00.267085 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:00.299707 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:00.299756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:00.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:02.248653 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:02.866175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:02.877065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:02.877139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:02.902030 1685746 cri.go:96] found id: ""
	I1222 01:41:02.902137 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.902161 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:02.902183 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:02.902277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:02.928023 1685746 cri.go:96] found id: ""
	I1222 01:41:02.928048 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.928058 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:02.928065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:02.928128 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:02.958559 1685746 cri.go:96] found id: ""
	I1222 01:41:02.958595 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.958605 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:02.958612 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:02.958675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:02.984249 1685746 cri.go:96] found id: ""
	I1222 01:41:02.984272 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.984281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:02.984287 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:02.984355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:03.033125 1685746 cri.go:96] found id: ""
	I1222 01:41:03.033152 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.033161 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:03.033167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:03.033228 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:03.058557 1685746 cri.go:96] found id: ""
	I1222 01:41:03.058583 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.058591 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:03.058598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:03.058657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:03.089068 1685746 cri.go:96] found id: ""
	I1222 01:41:03.089112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.089122 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:03.089132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:03.089210 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:03.119177 1685746 cri.go:96] found id: ""
	I1222 01:41:03.119201 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.119210 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:03.119220 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:03.119231 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:03.182970 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:03.183000 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:03.183013 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:03.207694 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:03.207726 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:03.238481 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:03.238559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:03.311496 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:03.311531 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:05.829656 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:05.840301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:05.840394 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:05.867057 1685746 cri.go:96] found id: ""
	I1222 01:41:05.867080 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.867089 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:05.867095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:05.867155 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:05.897184 1685746 cri.go:96] found id: ""
	I1222 01:41:05.897206 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.897215 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:05.897221 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:05.897284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:05.922902 1685746 cri.go:96] found id: ""
	I1222 01:41:05.922924 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.922933 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:05.922940 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:05.923001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:05.947567 1685746 cri.go:96] found id: ""
	I1222 01:41:05.947591 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.947600 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:05.947606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:05.947725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:05.973767 1685746 cri.go:96] found id: ""
	I1222 01:41:05.973795 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.973803 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:05.973810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:05.973870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:05.999045 1685746 cri.go:96] found id: ""
	I1222 01:41:05.999075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.999084 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:05.999090 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:05.999156 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:06.037292 1685746 cri.go:96] found id: ""
	I1222 01:41:06.037323 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.037331 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:06.037338 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:06.037403 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:06.063105 1685746 cri.go:96] found id: ""
	I1222 01:41:06.063136 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.063145 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:06.063155 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:06.063166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:06.118645 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:06.118682 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:06.134249 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:06.134283 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:06.202948 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:06.202967 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:06.202978 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:06.227736 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:06.227770 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:04.248851 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:06.748841 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:08.763766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:08.776166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:08.776292 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:08.802744 1685746 cri.go:96] found id: ""
	I1222 01:41:08.802770 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.802780 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:08.802787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:08.802897 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:08.829155 1685746 cri.go:96] found id: ""
	I1222 01:41:08.829196 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.829205 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:08.829212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:08.829286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:08.853323 1685746 cri.go:96] found id: ""
	I1222 01:41:08.853358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.853368 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:08.853374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:08.853442 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:08.878843 1685746 cri.go:96] found id: ""
	I1222 01:41:08.878871 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.878880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:08.878887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:08.878948 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:08.907348 1685746 cri.go:96] found id: ""
	I1222 01:41:08.907374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.907383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:08.907390 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:08.907459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:08.935980 1685746 cri.go:96] found id: ""
	I1222 01:41:08.936006 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.936015 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:08.936022 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:08.936103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:08.965110 1685746 cri.go:96] found id: ""
	I1222 01:41:08.965149 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.965159 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:08.965165 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:08.965240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:08.991481 1685746 cri.go:96] found id: ""
	I1222 01:41:08.991509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.991518 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:08.991527 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:08.991539 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:09.007297 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:09.007330 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:09.077476 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:09.077557 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:09.077597 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:09.102923 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:09.102958 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:09.131422 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:09.131450 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:09.248676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:11.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:11.686744 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:11.697606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:11.697689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:11.722593 1685746 cri.go:96] found id: ""
	I1222 01:41:11.722664 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.722686 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:11.722701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:11.722796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:11.767413 1685746 cri.go:96] found id: ""
	I1222 01:41:11.767439 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.767448 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:11.767454 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:11.767526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:11.800344 1685746 cri.go:96] found id: ""
	I1222 01:41:11.800433 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.800466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:11.800487 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:11.800594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:11.836608 1685746 cri.go:96] found id: ""
	I1222 01:41:11.836693 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.836717 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:11.836755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:11.836854 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:11.862781 1685746 cri.go:96] found id: ""
	I1222 01:41:11.862808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.862818 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:11.862830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:11.862894 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:11.891376 1685746 cri.go:96] found id: ""
	I1222 01:41:11.891401 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.891410 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:11.891416 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:11.891480 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:11.920553 1685746 cri.go:96] found id: ""
	I1222 01:41:11.920581 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.920590 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:11.920596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:11.920657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:11.948610 1685746 cri.go:96] found id: ""
	I1222 01:41:11.948634 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.948642 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:11.948651 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:11.948662 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:12.006298 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:12.006340 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:12.022860 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:12.022889 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:12.087185 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:12.087252 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:12.087282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:12.112381 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:12.112415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:14.645175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:14.655581 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:14.655655 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:14.683086 1685746 cri.go:96] found id: ""
	I1222 01:41:14.683110 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.683118 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:14.683125 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:14.683192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:14.708684 1685746 cri.go:96] found id: ""
	I1222 01:41:14.708707 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.708716 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:14.708723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:14.708783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:14.733550 1685746 cri.go:96] found id: ""
	I1222 01:41:14.733572 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.733580 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:14.733586 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:14.733653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:14.762029 1685746 cri.go:96] found id: ""
	I1222 01:41:14.762052 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.762061 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:14.762068 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:14.762191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:14.802569 1685746 cri.go:96] found id: ""
	I1222 01:41:14.802593 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.802602 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:14.802609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:14.802668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:14.829402 1685746 cri.go:96] found id: ""
	I1222 01:41:14.829425 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.829434 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:14.829440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:14.829499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:14.854254 1685746 cri.go:96] found id: ""
	I1222 01:41:14.854276 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.854285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:14.854291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:14.854350 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:14.879183 1685746 cri.go:96] found id: ""
	I1222 01:41:14.879205 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.879213 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:14.879222 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:14.879239 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:14.933758 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:14.933795 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:14.948809 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:14.948834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:15.022478 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:41:15.022594 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:15.022610 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:15.071291 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:15.071336 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:14.248149 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:16.748036 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:17.608065 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:17.618810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:17.618881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:17.643606 1685746 cri.go:96] found id: ""
	I1222 01:41:17.643633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.643643 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:17.643650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:17.643760 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:17.669609 1685746 cri.go:96] found id: ""
	I1222 01:41:17.669639 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.669649 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:17.669656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:17.669725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:17.694910 1685746 cri.go:96] found id: ""
	I1222 01:41:17.694934 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.694943 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:17.694950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:17.695009 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:17.721067 1685746 cri.go:96] found id: ""
	I1222 01:41:17.721101 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.721111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:17.721118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:17.721251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:17.762594 1685746 cri.go:96] found id: ""
	I1222 01:41:17.762669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.762691 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:17.762715 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:17.762802 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:17.806835 1685746 cri.go:96] found id: ""
	I1222 01:41:17.806870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.806880 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:17.806887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:17.806964 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:17.837236 1685746 cri.go:96] found id: ""
	I1222 01:41:17.837273 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.837284 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:17.837291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:17.837362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:17.867730 1685746 cri.go:96] found id: ""
	I1222 01:41:17.867802 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.867825 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:17.867840 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:17.867852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:17.927517 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:17.927555 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:17.943454 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:17.943484 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:18.012436 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:41:18.012522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:18.012553 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:18.040219 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:18.040262 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:20.572279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:20.583193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:20.583266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:20.609051 1685746 cri.go:96] found id: ""
	I1222 01:41:20.609075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.609083 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:20.609089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:20.609150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:20.635365 1685746 cri.go:96] found id: ""
	I1222 01:41:20.635391 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.635400 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:20.635406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:20.635470 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:20.664505 1685746 cri.go:96] found id: ""
	I1222 01:41:20.664532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.664541 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:20.664547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:20.664609 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:20.690863 1685746 cri.go:96] found id: ""
	I1222 01:41:20.690887 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.690904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:20.690916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:20.690981 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:20.716167 1685746 cri.go:96] found id: ""
	I1222 01:41:20.716188 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.716196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:20.716203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:20.716262 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:20.758512 1685746 cri.go:96] found id: ""
	I1222 01:41:20.758538 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.758547 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:20.758554 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:20.758612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:20.789839 1685746 cri.go:96] found id: ""
	I1222 01:41:20.789866 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.789875 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:20.789882 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:20.789944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:20.823216 1685746 cri.go:96] found id: ""
	I1222 01:41:20.823244 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.823254 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:20.823263 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:20.823275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:20.878834 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:20.878873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:20.894375 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:20.894409 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:20.963456 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:41:20.963479 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:20.963518 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:20.992875 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:20.992916 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:18.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:21.248234 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:23.526237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:23.540126 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:23.540244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:23.567806 1685746 cri.go:96] found id: ""
	I1222 01:41:23.567833 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.567842 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:23.567849 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:23.567915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:23.594496 1685746 cri.go:96] found id: ""
	I1222 01:41:23.594525 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.594538 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:23.594546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:23.594614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:23.621007 1685746 cri.go:96] found id: ""
	I1222 01:41:23.621034 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.621043 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:23.621050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:23.621111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:23.646829 1685746 cri.go:96] found id: ""
	I1222 01:41:23.646857 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.646867 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:23.646874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:23.646941 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:23.672993 1685746 cri.go:96] found id: ""
	I1222 01:41:23.673020 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.673030 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:23.673036 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:23.673099 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:23.704873 1685746 cri.go:96] found id: ""
	I1222 01:41:23.704901 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.704910 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:23.704916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:23.704980 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:23.731220 1685746 cri.go:96] found id: ""
	I1222 01:41:23.731248 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.731259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:23.731265 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:23.731330 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:23.769641 1685746 cri.go:96] found id: ""
	I1222 01:41:23.769669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.769678 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:23.769687 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:23.769701 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:23.811900 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:23.811928 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:23.870851 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:23.870887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:23.886411 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:23.886488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:23.954566 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:41:23.954588 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:23.954602 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.483766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:26.495024 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:26.495100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:26.521679 1685746 cri.go:96] found id: ""
	I1222 01:41:26.521706 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.521716 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:26.521723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:26.521786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:26.552746 1685746 cri.go:96] found id: ""
	I1222 01:41:26.552773 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.552782 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:26.552789 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:26.552856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:26.580045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.580072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.580082 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:26.580088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:26.580151 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:26.606656 1685746 cri.go:96] found id: ""
	I1222 01:41:26.606683 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.606693 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:26.606700 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:26.606759 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:26.632499 1685746 cri.go:96] found id: ""
	I1222 01:41:26.632539 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.632548 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:26.632556 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:26.632640 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:26.664045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.664072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.664082 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:26.664089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:26.664172 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	W1222 01:41:23.248384 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:25.748529 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:27.748967 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:26.689648 1685746 cri.go:96] found id: ""
	I1222 01:41:26.689672 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.689693 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:26.689704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:26.689772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:26.715926 1685746 cri.go:96] found id: ""
	I1222 01:41:26.715949 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.715958 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:26.715966 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:26.715977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:26.779696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:26.779785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:26.802335 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:26.802412 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:26.866575 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:26.866599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:26.866613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.893136 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:26.893176 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:29.425895 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:29.438488 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:29.438569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:29.467384 1685746 cri.go:96] found id: ""
	I1222 01:41:29.467415 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.467426 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:29.467432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:29.467497 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:29.502253 1685746 cri.go:96] found id: ""
	I1222 01:41:29.502277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.502285 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:29.502291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:29.502351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:29.538703 1685746 cri.go:96] found id: ""
	I1222 01:41:29.538730 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.538739 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:29.538747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:29.538809 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:29.567395 1685746 cri.go:96] found id: ""
	I1222 01:41:29.567422 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.567431 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:29.567439 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:29.567500 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:29.595415 1685746 cri.go:96] found id: ""
	I1222 01:41:29.595493 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.595508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:29.595516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:29.595583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:29.622583 1685746 cri.go:96] found id: ""
	I1222 01:41:29.622611 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.622620 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:29.622627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:29.622693 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:29.649130 1685746 cri.go:96] found id: ""
	I1222 01:41:29.649156 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.649166 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:29.649173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:29.649240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:29.676205 1685746 cri.go:96] found id: ""
	I1222 01:41:29.676231 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.676240 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:29.676250 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:29.676279 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:29.731980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:29.732016 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:29.747474 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:29.747503 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:29.833319 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:29.833342 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:29.833355 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:29.859398 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:29.859432 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:30.247999 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:32.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:32.387755 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:32.398548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:32.398639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:32.422848 1685746 cri.go:96] found id: ""
	I1222 01:41:32.422870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.422879 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:32.422885 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:32.422976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:32.448126 1685746 cri.go:96] found id: ""
	I1222 01:41:32.448153 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.448162 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:32.448171 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:32.448233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:32.476732 1685746 cri.go:96] found id: ""
	I1222 01:41:32.476769 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.476779 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:32.476785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:32.476856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:32.521856 1685746 cri.go:96] found id: ""
	I1222 01:41:32.521885 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.521915 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:32.521923 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:32.522010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:32.559083 1685746 cri.go:96] found id: ""
	I1222 01:41:32.559112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.559121 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:32.559128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:32.559199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:32.585037 1685746 cri.go:96] found id: ""
	I1222 01:41:32.585066 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.585076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:32.585082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:32.585142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:32.611094 1685746 cri.go:96] found id: ""
	I1222 01:41:32.611117 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.611126 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:32.611132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:32.611200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:32.636572 1685746 cri.go:96] found id: ""
	I1222 01:41:32.636598 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.636606 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:32.636614 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:32.636626 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:32.691721 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:32.691756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:32.706757 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:32.706791 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:32.784203 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:32.784277 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:32.784302 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:32.812067 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:32.812099 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:35.344181 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:35.354549 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:35.354621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:35.378138 1685746 cri.go:96] found id: ""
	I1222 01:41:35.378160 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.378169 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:35.378177 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:35.378236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:35.403725 1685746 cri.go:96] found id: ""
	I1222 01:41:35.403748 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.403757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:35.403764 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:35.403825 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:35.429025 1685746 cri.go:96] found id: ""
	I1222 01:41:35.429050 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.429059 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:35.429066 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:35.429129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:35.459607 1685746 cri.go:96] found id: ""
	I1222 01:41:35.459633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.459642 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:35.459649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:35.459707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:35.483992 1685746 cri.go:96] found id: ""
	I1222 01:41:35.484015 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.484024 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:35.484031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:35.484094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:35.517254 1685746 cri.go:96] found id: ""
	I1222 01:41:35.517277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.517286 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:35.517293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:35.517353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:35.546137 1685746 cri.go:96] found id: ""
	I1222 01:41:35.546219 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.546242 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:35.546284 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:35.546378 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:35.576307 1685746 cri.go:96] found id: ""
	I1222 01:41:35.576329 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.576338 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:35.576347 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:35.576358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:35.631853 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:35.631887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:35.646787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:35.646827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:35.713895 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:35.713927 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:35.713943 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:35.739168 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:35.739250 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:34.248875 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:36.748177 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:38.278358 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:38.289460 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:38.289534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:38.316292 1685746 cri.go:96] found id: ""
	I1222 01:41:38.316320 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.316329 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:38.316336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:38.316416 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:38.344932 1685746 cri.go:96] found id: ""
	I1222 01:41:38.344960 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.344969 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:38.344976 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:38.345038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:38.371484 1685746 cri.go:96] found id: ""
	I1222 01:41:38.371509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.371519 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:38.371525 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:38.371594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:38.401114 1685746 cri.go:96] found id: ""
	I1222 01:41:38.401140 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.401149 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:38.401157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:38.401217 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:38.427857 1685746 cri.go:96] found id: ""
	I1222 01:41:38.427881 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.427890 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:38.427897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:38.427962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:38.453333 1685746 cri.go:96] found id: ""
	I1222 01:41:38.453358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.453367 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:38.453374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:38.453455 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:38.477527 1685746 cri.go:96] found id: ""
	I1222 01:41:38.477610 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.477633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:38.477655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:38.477748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:38.523741 1685746 cri.go:96] found id: ""
	I1222 01:41:38.523763 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.523772 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:38.523787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:38.523798 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:38.595469 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:38.595491 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:38.595508 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:38.621769 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:38.621808 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:38.651477 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:38.651507 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:38.710896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:38.710934 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.227040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:41.237881 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:41.237954 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:41.265636 1685746 cri.go:96] found id: ""
	I1222 01:41:41.265671 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.265680 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:41.265687 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:41.265757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:41.291304 1685746 cri.go:96] found id: ""
	I1222 01:41:41.291330 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.291339 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:41.291346 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:41.291414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:41.316968 1685746 cri.go:96] found id: ""
	I1222 01:41:41.317003 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.317013 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:41.317020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:41.317094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:41.342750 1685746 cri.go:96] found id: ""
	I1222 01:41:41.342779 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.342794 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:41.342801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:41.342865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:41.368173 1685746 cri.go:96] found id: ""
	I1222 01:41:41.368197 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.368205 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:41.368212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:41.368275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:41.396263 1685746 cri.go:96] found id: ""
	I1222 01:41:41.396290 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.396300 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:41.396308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:41.396380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:41.424002 1685746 cri.go:96] found id: ""
	I1222 01:41:41.424028 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.424037 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:41.424044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:41.424104 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:41.450858 1685746 cri.go:96] found id: ""
	I1222 01:41:41.450886 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.450894 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:41.450904 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:41.450915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:41.510703 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:41.510785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.529398 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:41.529475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:41.596968 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:41.596989 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:41.597002 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:41.623436 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:41.623472 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:39.248106 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:41.748067 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:44.153585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:44.164792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:44.164865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:44.190259 1685746 cri.go:96] found id: ""
	I1222 01:41:44.190282 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.190290 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:44.190297 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:44.190357 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:44.223886 1685746 cri.go:96] found id: ""
	I1222 01:41:44.223911 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.223922 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:44.223929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:44.223988 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:44.249898 1685746 cri.go:96] found id: ""
	I1222 01:41:44.249922 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.249931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:44.249948 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:44.250010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:44.275190 1685746 cri.go:96] found id: ""
	I1222 01:41:44.275217 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.275227 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:44.275233 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:44.275325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:44.301198 1685746 cri.go:96] found id: ""
	I1222 01:41:44.301221 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.301230 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:44.301237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:44.301311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:44.325952 1685746 cri.go:96] found id: ""
	I1222 01:41:44.325990 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.326000 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:44.326023 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:44.326154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:44.352189 1685746 cri.go:96] found id: ""
	I1222 01:41:44.352227 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.352236 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:44.352259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:44.352334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:44.377820 1685746 cri.go:96] found id: ""
	I1222 01:41:44.377848 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.377858 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:44.377868 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:44.377879 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:44.393230 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:44.393258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:44.463151 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:44.463175 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:44.463188 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:44.488611 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:44.488690 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:44.523935 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:44.524011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:44.248599 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:46.748094 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:47.091277 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:47.102299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:47.102374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:47.128309 1685746 cri.go:96] found id: ""
	I1222 01:41:47.128334 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.128344 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:47.128351 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:47.128431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:47.154429 1685746 cri.go:96] found id: ""
	I1222 01:41:47.154456 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.154465 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:47.154473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:47.154535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:47.179829 1685746 cri.go:96] found id: ""
	I1222 01:41:47.179856 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.179865 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:47.179872 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:47.179933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:47.204965 1685746 cri.go:96] found id: ""
	I1222 01:41:47.204999 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.205009 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:47.205016 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:47.205088 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:47.231912 1685746 cri.go:96] found id: ""
	I1222 01:41:47.231939 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.231949 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:47.231955 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:47.232043 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:47.262187 1685746 cri.go:96] found id: ""
	I1222 01:41:47.262215 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.262230 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:47.262237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:47.262301 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:47.287536 1685746 cri.go:96] found id: ""
	I1222 01:41:47.287567 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.287577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:47.287583 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:47.287648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:47.313516 1685746 cri.go:96] found id: ""
	I1222 01:41:47.313544 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.313553 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:47.313563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:47.313573 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:47.369295 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:47.369329 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:47.387169 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:47.387197 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:47.455311 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:47.455335 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:47.455347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:47.481041 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:47.481078 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:50.030868 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:50.043616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:50.043692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:50.072180 1685746 cri.go:96] found id: ""
	I1222 01:41:50.072210 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.072220 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:50.072229 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:50.072297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:50.100979 1685746 cri.go:96] found id: ""
	I1222 01:41:50.101005 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.101014 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:50.101021 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:50.101091 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:50.128360 1685746 cri.go:96] found id: ""
	I1222 01:41:50.128392 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.128404 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:50.128411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:50.128476 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:50.154912 1685746 cri.go:96] found id: ""
	I1222 01:41:50.154945 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.154955 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:50.154963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:50.155033 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:50.181433 1685746 cri.go:96] found id: ""
	I1222 01:41:50.181465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.181474 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:50.181483 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:50.181553 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:50.207260 1685746 cri.go:96] found id: ""
	I1222 01:41:50.207289 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.207299 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:50.207305 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:50.207366 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:50.234601 1685746 cri.go:96] found id: ""
	I1222 01:41:50.234649 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.234659 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:50.234666 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:50.234744 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:50.264579 1685746 cri.go:96] found id: ""
	I1222 01:41:50.264621 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.264631 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:50.264641 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:50.264661 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:50.321078 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:50.321112 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:50.336044 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:50.336069 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:50.401373 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:50.401396 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:50.401410 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:50.428108 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:50.428151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:48.749155 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:51.248977 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:52.958393 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:52.969793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:52.969867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:53.021307 1685746 cri.go:96] found id: ""
	I1222 01:41:53.021331 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.021340 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:53.021352 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:53.021415 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:53.053765 1685746 cri.go:96] found id: ""
	I1222 01:41:53.053789 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.053798 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:53.053804 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:53.053872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:53.079107 1685746 cri.go:96] found id: ""
	I1222 01:41:53.079135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.079144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:53.079152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:53.079214 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:53.106101 1685746 cri.go:96] found id: ""
	I1222 01:41:53.106130 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.106138 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:53.106145 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:53.106209 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:53.135616 1685746 cri.go:96] found id: ""
	I1222 01:41:53.135643 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.135652 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:53.135659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:53.135766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:53.160318 1685746 cri.go:96] found id: ""
	I1222 01:41:53.160344 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.160353 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:53.160360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:53.160451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:53.185257 1685746 cri.go:96] found id: ""
	I1222 01:41:53.185297 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.185306 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:53.185313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:53.185401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:53.210753 1685746 cri.go:96] found id: ""
	I1222 01:41:53.210824 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.210839 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:53.210855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:53.210867 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:53.237290 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:53.237323 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:53.267342 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:53.267374 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:53.323394 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:53.323429 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:53.339435 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:53.339465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:53.403286 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:55.903619 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:55.914760 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:55.914836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:55.939507 1685746 cri.go:96] found id: ""
	I1222 01:41:55.939532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.939541 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:55.939548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:55.939614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:55.965607 1685746 cri.go:96] found id: ""
	I1222 01:41:55.965633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.965643 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:55.965649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:55.965715 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:56.006138 1685746 cri.go:96] found id: ""
	I1222 01:41:56.006171 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.006181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:56.006188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:56.006256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:56.040087 1685746 cri.go:96] found id: ""
	I1222 01:41:56.040116 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.040125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:56.040131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:56.040191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:56.068695 1685746 cri.go:96] found id: ""
	I1222 01:41:56.068719 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.068727 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:56.068734 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:56.068795 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:56.096726 1685746 cri.go:96] found id: ""
	I1222 01:41:56.096808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.096832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:56.096854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:56.096963 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:56.125548 1685746 cri.go:96] found id: ""
	I1222 01:41:56.125627 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.125652 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:56.125675 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:56.125763 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:56.150956 1685746 cri.go:96] found id: ""
	I1222 01:41:56.150986 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.150995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:56.151005 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:56.151049 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:56.216560 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:56.216581 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:56.216594 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:56.242334 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:56.242368 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:56.270763 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:56.270793 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:56.325996 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:56.326038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:41:53.748987 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:56.248859 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:58.841618 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:58.852321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:58.852411 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:58.877439 1685746 cri.go:96] found id: ""
	I1222 01:41:58.877465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.877475 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:58.877482 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:58.877542 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:58.902343 1685746 cri.go:96] found id: ""
	I1222 01:41:58.902369 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.902378 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:58.902385 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:58.902443 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:58.927733 1685746 cri.go:96] found id: ""
	I1222 01:41:58.927758 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.927767 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:58.927774 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:58.927834 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:58.954349 1685746 cri.go:96] found id: ""
	I1222 01:41:58.954374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.954384 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:58.954391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:58.954464 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:58.984449 1685746 cri.go:96] found id: ""
	I1222 01:41:58.984519 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.984533 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:58.984541 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:58.984612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:59.020245 1685746 cri.go:96] found id: ""
	I1222 01:41:59.020277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.020294 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:59.020303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:59.020387 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:59.059067 1685746 cri.go:96] found id: ""
	I1222 01:41:59.059135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.059157 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:59.059170 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:59.059244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:59.090327 1685746 cri.go:96] found id: ""
	I1222 01:41:59.090355 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.090364 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:59.090372 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:59.090384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:59.149768 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:59.149809 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:59.164825 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:59.164857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:59.232698 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:59.232720 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:59.232734 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:59.258805 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:59.258840 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:58.748026 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:00.748292 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:01.787611 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:01.799088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:01.799206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:01.829442 1685746 cri.go:96] found id: ""
	I1222 01:42:01.829521 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.829543 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:01.829566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:01.829657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:01.856095 1685746 cri.go:96] found id: ""
	I1222 01:42:01.856122 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.856132 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:01.856139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:01.856203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:01.882443 1685746 cri.go:96] found id: ""
	I1222 01:42:01.882469 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.882478 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:01.882485 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:01.882549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:01.908008 1685746 cri.go:96] found id: ""
	I1222 01:42:01.908033 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.908043 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:01.908049 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:01.908111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:01.934350 1685746 cri.go:96] found id: ""
	I1222 01:42:01.934377 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.934386 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:01.934393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:01.934457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:01.960407 1685746 cri.go:96] found id: ""
	I1222 01:42:01.960433 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.960442 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:01.960449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:01.960512 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:01.988879 1685746 cri.go:96] found id: ""
	I1222 01:42:01.988915 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.988925 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:01.988931 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:01.989000 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:02.021404 1685746 cri.go:96] found id: ""
	I1222 01:42:02.021444 1685746 logs.go:282] 0 containers: []
	W1222 01:42:02.021454 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:02.021464 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:02.021476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:02.053252 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:02.053282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:02.111509 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:02.111548 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:02.127002 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:02.127081 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:02.196408 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:02.196429 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:02.196442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:04.723107 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:04.734699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:04.734786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:04.771439 1685746 cri.go:96] found id: ""
	I1222 01:42:04.771462 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.771471 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:04.771477 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:04.771540 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:04.806612 1685746 cri.go:96] found id: ""
	I1222 01:42:04.806639 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.806648 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:04.806655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:04.806714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:04.832290 1685746 cri.go:96] found id: ""
	I1222 01:42:04.832320 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.832329 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:04.832336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:04.832404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:04.860422 1685746 cri.go:96] found id: ""
	I1222 01:42:04.860460 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.860469 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:04.860494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:04.860603 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:04.885397 1685746 cri.go:96] found id: ""
	I1222 01:42:04.885424 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.885433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:04.885440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:04.885524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:04.910499 1685746 cri.go:96] found id: ""
	I1222 01:42:04.910529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.910539 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:04.910546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:04.910607 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:04.934849 1685746 cri.go:96] found id: ""
	I1222 01:42:04.934887 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.934897 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:04.934921 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:04.935013 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:04.964384 1685746 cri.go:96] found id: ""
	I1222 01:42:04.964411 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.964420 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:04.964429 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:04.964460 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:05.023249 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:05.023347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:05.042677 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:05.042702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:05.113125 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:05.113151 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:05.113167 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:05.139072 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:05.139109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:03.248327 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:05.748676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:07.672253 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:07.683433 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:07.683523 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:07.710000 1685746 cri.go:96] found id: ""
	I1222 01:42:07.710025 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.710033 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:07.710040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:07.710129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:07.749657 1685746 cri.go:96] found id: ""
	I1222 01:42:07.749685 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.749695 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:07.749702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:07.749769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:07.779817 1685746 cri.go:96] found id: ""
	I1222 01:42:07.779844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.779853 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:07.779860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:07.779920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:07.809501 1685746 cri.go:96] found id: ""
	I1222 01:42:07.809529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.809538 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:07.809546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:07.809606 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:07.834291 1685746 cri.go:96] found id: ""
	I1222 01:42:07.834318 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.834327 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:07.834334 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:07.834395 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:07.859724 1685746 cri.go:96] found id: ""
	I1222 01:42:07.859791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.859807 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:07.859814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:07.859874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:07.891259 1685746 cri.go:96] found id: ""
	I1222 01:42:07.891287 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.891296 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:07.891303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:07.891362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:07.916371 1685746 cri.go:96] found id: ""
	I1222 01:42:07.916451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.916467 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:07.916477 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:07.916489 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:07.943955 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:07.943981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:08.000957 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:08.001003 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:08.021265 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:08.021299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:08.098699 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:08.098725 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:08.098739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:10.625986 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:10.637185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:10.637275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:10.663011 1685746 cri.go:96] found id: ""
	I1222 01:42:10.663039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.663048 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:10.663055 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:10.663121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:10.689593 1685746 cri.go:96] found id: ""
	I1222 01:42:10.689623 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.689633 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:10.689639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:10.689704 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:10.718520 1685746 cri.go:96] found id: ""
	I1222 01:42:10.718545 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.718554 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:10.718561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:10.718627 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:10.748796 1685746 cri.go:96] found id: ""
	I1222 01:42:10.748829 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.748839 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:10.748846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:10.748919 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:10.780456 1685746 cri.go:96] found id: ""
	I1222 01:42:10.780493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.780508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:10.780515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:10.780591 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:10.810196 1685746 cri.go:96] found id: ""
	I1222 01:42:10.810234 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.810243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:10.810250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:10.810346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:10.836475 1685746 cri.go:96] found id: ""
	I1222 01:42:10.836502 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.836511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:10.836518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:10.836582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:10.862222 1685746 cri.go:96] found id: ""
	I1222 01:42:10.862246 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.862255 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:10.862264 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:10.862275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:10.918613 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:10.918648 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:10.933449 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:10.933478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:11.013628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:11.013706 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:11.013738 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:11.042713 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:11.042803 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:08.248287 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:10.748100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:12.748911 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:13.581897 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:13.592897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:13.592969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:13.621158 1685746 cri.go:96] found id: ""
	I1222 01:42:13.621184 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.621194 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:13.621200 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:13.621265 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:13.646742 1685746 cri.go:96] found id: ""
	I1222 01:42:13.646769 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.646778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:13.646784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:13.646843 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:13.671981 1685746 cri.go:96] found id: ""
	I1222 01:42:13.672014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.672023 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:13.672030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:13.672093 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:13.697359 1685746 cri.go:96] found id: ""
	I1222 01:42:13.697387 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.697397 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:13.697408 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:13.697471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:13.723455 1685746 cri.go:96] found id: ""
	I1222 01:42:13.723481 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.723491 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:13.723499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:13.723560 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:13.762227 1685746 cri.go:96] found id: ""
	I1222 01:42:13.762251 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.762259 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:13.762266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:13.762325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:13.792416 1685746 cri.go:96] found id: ""
	I1222 01:42:13.792440 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.792448 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:13.792455 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:13.792521 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:13.824151 1685746 cri.go:96] found id: ""
	I1222 01:42:13.824178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.824188 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:13.824227 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:13.824251 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:13.839610 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:13.839639 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:13.903103 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:13.903125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:13.903138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:13.928958 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:13.928992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:13.959685 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:13.959714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.518219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:16.529223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:16.529294 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:16.555927 1685746 cri.go:96] found id: ""
	I1222 01:42:16.555953 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.555962 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:16.555969 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:16.556028 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:16.581196 1685746 cri.go:96] found id: ""
	I1222 01:42:16.581223 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.581233 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:16.581240 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:16.581303 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:16.607543 1685746 cri.go:96] found id: ""
	I1222 01:42:16.607569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.607578 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:16.607585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:16.607651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:16.637077 1685746 cri.go:96] found id: ""
	I1222 01:42:16.637106 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.637116 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:16.637123 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:16.637183 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:16.662155 1685746 cri.go:96] found id: ""
	I1222 01:42:16.662178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.662187 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:16.662193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:16.662257 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	W1222 01:42:14.749008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:17.249086 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:16.694483 1685746 cri.go:96] found id: ""
	I1222 01:42:16.694507 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.694516 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:16.694523 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:16.694582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:16.719153 1685746 cri.go:96] found id: ""
	I1222 01:42:16.719178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.719188 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:16.719195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:16.719258 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:16.750982 1685746 cri.go:96] found id: ""
	I1222 01:42:16.751007 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.751017 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:16.751026 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:16.751038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.809848 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:16.809888 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:16.828821 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:16.828852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:16.896032 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:16.896058 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:16.896071 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:16.921650 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:16.921686 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.450391 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:19.461241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:19.461314 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:19.488679 1685746 cri.go:96] found id: ""
	I1222 01:42:19.488705 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.488715 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:19.488722 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:19.488784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:19.514947 1685746 cri.go:96] found id: ""
	I1222 01:42:19.514972 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.514982 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:19.514989 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:19.515051 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:19.541761 1685746 cri.go:96] found id: ""
	I1222 01:42:19.541786 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.541795 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:19.541802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:19.541867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:19.566418 1685746 cri.go:96] found id: ""
	I1222 01:42:19.566441 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.566450 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:19.566456 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:19.566515 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:19.591707 1685746 cri.go:96] found id: ""
	I1222 01:42:19.591739 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.591748 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:19.591754 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:19.591857 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:19.618308 1685746 cri.go:96] found id: ""
	I1222 01:42:19.618343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.618352 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:19.618362 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:19.618441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:19.644750 1685746 cri.go:96] found id: ""
	I1222 01:42:19.644791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.644801 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:19.644808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:19.644883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:19.674267 1685746 cri.go:96] found id: ""
	I1222 01:42:19.674295 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.674304 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:19.674315 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:19.674327 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:19.689360 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:19.689445 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:19.766188 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:19.766263 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:19.766290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:19.793580 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:19.793657 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.829853 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:19.829884 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:42:19.748284 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:22.248100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:22.388471 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:22.399089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:22.399192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:22.428498 1685746 cri.go:96] found id: ""
	I1222 01:42:22.428569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.428583 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:22.428591 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:22.428672 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:22.458145 1685746 cri.go:96] found id: ""
	I1222 01:42:22.458182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.458196 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:22.458203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:22.458276 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:22.485165 1685746 cri.go:96] found id: ""
	I1222 01:42:22.485202 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.485212 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:22.485218 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:22.485283 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:22.510263 1685746 cri.go:96] found id: ""
	I1222 01:42:22.510292 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.510302 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:22.510308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:22.510374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:22.539347 1685746 cri.go:96] found id: ""
	I1222 01:42:22.539374 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.539383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:22.539391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:22.539453 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:22.564154 1685746 cri.go:96] found id: ""
	I1222 01:42:22.564182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.564193 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:22.564205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:22.564311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:22.593661 1685746 cri.go:96] found id: ""
	I1222 01:42:22.593688 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.593697 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:22.593703 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:22.593767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:22.618629 1685746 cri.go:96] found id: ""
	I1222 01:42:22.618654 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.618663 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:22.618672 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:22.618714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:22.675019 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:22.675057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:22.690208 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:22.690241 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:22.759102 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:22.759127 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:22.759140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:22.790419 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:22.790453 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:25.330239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:25.341121 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:25.341190 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:25.370417 1685746 cri.go:96] found id: ""
	I1222 01:42:25.370493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.370523 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:25.370543 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:25.370636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:25.399975 1685746 cri.go:96] found id: ""
	I1222 01:42:25.400000 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.400009 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:25.400015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:25.400075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:25.424384 1685746 cri.go:96] found id: ""
	I1222 01:42:25.424414 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.424424 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:25.424431 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:25.424491 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:25.453828 1685746 cri.go:96] found id: ""
	I1222 01:42:25.453916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.453956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:25.453984 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:25.454124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:25.480847 1685746 cri.go:96] found id: ""
	I1222 01:42:25.480868 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.480877 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:25.480883 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:25.480942 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:25.508776 1685746 cri.go:96] found id: ""
	I1222 01:42:25.508801 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.508810 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:25.508817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:25.508877 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:25.539362 1685746 cri.go:96] found id: ""
	I1222 01:42:25.539385 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.539396 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:25.539402 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:25.539461 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:25.566615 1685746 cri.go:96] found id: ""
	I1222 01:42:25.566641 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.566650 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:25.566659 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:25.566670 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:25.622750 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:25.622784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:25.638693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:25.638728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:25.702796 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:25.702823 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:25.702835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:25.727901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:25.727938 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:24.248221 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:26.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:29.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:31.247763 1681323 node_ready.go:38] duration metric: took 6m0.000217195s for node "no-preload-154186" to be "Ready" ...
	I1222 01:42:31.251066 1681323 out.go:203] 
	W1222 01:42:31.253946 1681323 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:42:31.253969 1681323 out.go:285] * 
	W1222 01:42:31.256107 1681323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:42:31.259342 1681323 out.go:203] 
	I1222 01:42:28.269113 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:28.280220 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:28.280317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:28.305926 1685746 cri.go:96] found id: ""
	I1222 01:42:28.305948 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.305957 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:28.305963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:28.306020 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:28.330985 1685746 cri.go:96] found id: ""
	I1222 01:42:28.331010 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.331020 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:28.331026 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:28.331086 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:28.357992 1685746 cri.go:96] found id: ""
	I1222 01:42:28.358018 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.358028 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:28.358035 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:28.358131 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:28.384559 1685746 cri.go:96] found id: ""
	I1222 01:42:28.384585 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.384594 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:28.384603 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:28.384665 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:28.412628 1685746 cri.go:96] found id: ""
	I1222 01:42:28.412650 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.412659 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:28.412665 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:28.412731 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:28.438582 1685746 cri.go:96] found id: ""
	I1222 01:42:28.438605 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.438613 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:28.438620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:28.438685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:28.468458 1685746 cri.go:96] found id: ""
	I1222 01:42:28.468484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.468493 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:28.468500 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:28.468565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:28.493207 1685746 cri.go:96] found id: ""
	I1222 01:42:28.493231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.493239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:28.493249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:28.493260 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:28.547741 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:28.547777 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:28.562578 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:28.562608 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:28.637227 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:28.637250 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:28.637263 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:28.662593 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:28.662632 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.190941 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:31.202783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:31.202858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:31.227601 1685746 cri.go:96] found id: ""
	I1222 01:42:31.227625 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.227633 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:31.227642 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:31.227718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:31.267011 1685746 cri.go:96] found id: ""
	I1222 01:42:31.267040 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.267049 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:31.267056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:31.267118 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:31.363207 1685746 cri.go:96] found id: ""
	I1222 01:42:31.363231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.363239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:31.363246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:31.363320 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:31.412753 1685746 cri.go:96] found id: ""
	I1222 01:42:31.412780 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.412788 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:31.412796 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:31.412858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:31.453115 1685746 cri.go:96] found id: ""
	I1222 01:42:31.453145 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.453154 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:31.453167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:31.453225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:31.492529 1685746 cri.go:96] found id: ""
	I1222 01:42:31.492550 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.492558 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:31.492565 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:31.492621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:31.529156 1685746 cri.go:96] found id: ""
	I1222 01:42:31.529179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.529187 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:31.529193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:31.529252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:31.561255 1685746 cri.go:96] found id: ""
	I1222 01:42:31.561283 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.561292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:31.561301 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:31.561314 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.622500 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:31.622526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:31.690749 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:31.690784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:31.706062 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:31.706182 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:31.827329 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:31.792352    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804228    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804969    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820004    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820674    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:31.792352    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804228    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804969    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820004    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820674    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:31.827354 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:31.827369 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.368888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:34.380077 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:34.380154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:34.406174 1685746 cri.go:96] found id: ""
	I1222 01:42:34.406198 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.406207 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:34.406213 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:34.406280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:34.437127 1685746 cri.go:96] found id: ""
	I1222 01:42:34.437152 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.437161 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:34.437168 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:34.437234 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:34.462419 1685746 cri.go:96] found id: ""
	I1222 01:42:34.462445 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.462454 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:34.462463 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:34.462524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:34.491011 1685746 cri.go:96] found id: ""
	I1222 01:42:34.491039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.491049 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:34.491056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:34.491117 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:34.515544 1685746 cri.go:96] found id: ""
	I1222 01:42:34.515570 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.515580 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:34.515587 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:34.515644 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:34.543686 1685746 cri.go:96] found id: ""
	I1222 01:42:34.543714 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.543722 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:34.543730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:34.543788 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:34.572402 1685746 cri.go:96] found id: ""
	I1222 01:42:34.572427 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.572436 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:34.572442 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:34.572561 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:34.597762 1685746 cri.go:96] found id: ""
	I1222 01:42:34.597789 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.597799 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:34.597808 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:34.597820 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.622955 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:34.622991 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:34.651563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:34.651592 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:34.708102 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:34.708139 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:34.723329 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:34.723358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:34.788870 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:34.780572    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.781230    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.782852    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.783472    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.785097    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:34.780572    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.781230    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.782852    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.783472    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.785097    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:37.289033 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:37.307914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:37.308010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:37.342876 1685746 cri.go:96] found id: ""
	I1222 01:42:37.342916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.342925 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:37.342932 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:37.342994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:37.369883 1685746 cri.go:96] found id: ""
	I1222 01:42:37.369912 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.369921 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:37.369928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:37.369990 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:37.399765 1685746 cri.go:96] found id: ""
	I1222 01:42:37.399792 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.399800 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:37.399807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:37.399887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:37.425866 1685746 cri.go:96] found id: ""
	I1222 01:42:37.425894 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.425904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:37.425911 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:37.425976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:37.452177 1685746 cri.go:96] found id: ""
	I1222 01:42:37.452252 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.452273 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:37.452280 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:37.452349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:37.478374 1685746 cri.go:96] found id: ""
	I1222 01:42:37.478405 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.478415 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:37.478421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:37.478482 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:37.504627 1685746 cri.go:96] found id: ""
	I1222 01:42:37.504663 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.504672 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:37.504679 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:37.504785 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:37.531304 1685746 cri.go:96] found id: ""
	I1222 01:42:37.531343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.531353 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:37.531380 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:37.531399 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:37.559371 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:37.559401 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:37.614026 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:37.614064 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:37.630657 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:37.630689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:37.698972 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:37.690733    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.691573    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.693410    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.694215    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.695368    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:37.690733    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.691573    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.693410    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.694215    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.695368    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:37.698998 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:37.699010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.226630 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:40.251806 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:40.251880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:40.312461 1685746 cri.go:96] found id: ""
	I1222 01:42:40.312484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.312493 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:40.312499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:40.312559 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:40.346654 1685746 cri.go:96] found id: ""
	I1222 01:42:40.346682 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.346691 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:40.346697 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:40.346757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:40.376245 1685746 cri.go:96] found id: ""
	I1222 01:42:40.376279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.376288 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:40.376294 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:40.376355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:40.400546 1685746 cri.go:96] found id: ""
	I1222 01:42:40.400572 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.400581 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:40.400588 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:40.400647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:40.425326 1685746 cri.go:96] found id: ""
	I1222 01:42:40.425353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.425362 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:40.425369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:40.425431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:40.449304 1685746 cri.go:96] found id: ""
	I1222 01:42:40.449328 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.449337 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:40.449345 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:40.449405 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:40.474828 1685746 cri.go:96] found id: ""
	I1222 01:42:40.474854 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.474863 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:40.474870 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:40.474931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:40.503909 1685746 cri.go:96] found id: ""
	I1222 01:42:40.503933 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.503941 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:40.503950 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:40.503960 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:40.559784 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:40.559821 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:40.575010 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:40.575041 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:40.643863 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:40.643888 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:40.643900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.674641 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:40.674683 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:43.208931 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:43.219892 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:43.219965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:43.278356 1685746 cri.go:96] found id: ""
	I1222 01:42:43.278383 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.278393 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:43.278399 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:43.278468 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:43.318802 1685746 cri.go:96] found id: ""
	I1222 01:42:43.318828 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.318838 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:43.318844 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:43.318903 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:43.351222 1685746 cri.go:96] found id: ""
	I1222 01:42:43.351247 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.351256 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:43.351263 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:43.351323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:43.377242 1685746 cri.go:96] found id: ""
	I1222 01:42:43.377267 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.377275 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:43.377282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:43.377346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:43.403326 1685746 cri.go:96] found id: ""
	I1222 01:42:43.403353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.403363 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:43.403370 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:43.403459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:43.429205 1685746 cri.go:96] found id: ""
	I1222 01:42:43.429232 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.429241 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:43.429248 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:43.429351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:43.455157 1685746 cri.go:96] found id: ""
	I1222 01:42:43.455188 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.455198 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:43.455204 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:43.455274 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:43.484817 1685746 cri.go:96] found id: ""
	I1222 01:42:43.484846 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.484856 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:43.484866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:43.484877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:43.544248 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:43.544285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:43.559152 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:43.559184 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:43.623520 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:43.623546 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:43.623559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:43.648911 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:43.648951 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:46.182386 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:46.193692 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:46.193766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:46.219554 1685746 cri.go:96] found id: ""
	I1222 01:42:46.219592 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.219602 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:46.219608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:46.219667 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:46.269097 1685746 cri.go:96] found id: ""
	I1222 01:42:46.269128 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.269137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:46.269152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:46.269215 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:46.315573 1685746 cri.go:96] found id: ""
	I1222 01:42:46.315609 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.315619 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:46.315627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:46.315698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:46.354254 1685746 cri.go:96] found id: ""
	I1222 01:42:46.354291 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.354300 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:46.354311 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:46.354385 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:46.382733 1685746 cri.go:96] found id: ""
	I1222 01:42:46.382810 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.382823 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:46.382831 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:46.382893 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:46.409988 1685746 cri.go:96] found id: ""
	I1222 01:42:46.410014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.410024 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:46.410032 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:46.410123 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:46.440621 1685746 cri.go:96] found id: ""
	I1222 01:42:46.440645 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.440654 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:46.440661 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:46.440726 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:46.466426 1685746 cri.go:96] found id: ""
	I1222 01:42:46.466451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.466461 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:46.466478 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:46.466491 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:46.522404 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:46.522449 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:46.538001 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:46.538129 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:46.608273 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:46.608296 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:46.608311 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:46.634354 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:46.634388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.167965 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:49.178919 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:49.178992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:49.204884 1685746 cri.go:96] found id: ""
	I1222 01:42:49.204909 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.204917 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:49.204924 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:49.204992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:49.231503 1685746 cri.go:96] found id: ""
	I1222 01:42:49.231530 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.231539 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:49.231547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:49.231611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:49.274476 1685746 cri.go:96] found id: ""
	I1222 01:42:49.274500 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.274508 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:49.274515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:49.274577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:49.318032 1685746 cri.go:96] found id: ""
	I1222 01:42:49.318054 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.318063 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:49.318069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:49.318163 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:49.361375 1685746 cri.go:96] found id: ""
	I1222 01:42:49.361398 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.361407 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:49.361414 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:49.361475 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:49.389203 1685746 cri.go:96] found id: ""
	I1222 01:42:49.389230 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.389240 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:49.389247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:49.389315 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:49.419554 1685746 cri.go:96] found id: ""
	I1222 01:42:49.419579 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.419588 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:49.419595 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:49.419656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:49.448457 1685746 cri.go:96] found id: ""
	I1222 01:42:49.448482 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.448491 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:49.448501 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:49.448513 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.477586 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:49.477616 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:49.534782 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:49.534822 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:49.550136 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:49.550166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:49.618143 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:49.618169 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:49.618190 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.144370 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:52.155874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:52.155999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:52.183608 1685746 cri.go:96] found id: ""
	I1222 01:42:52.183633 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.183641 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:52.183648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:52.183710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:52.213975 1685746 cri.go:96] found id: ""
	I1222 01:42:52.214002 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.214011 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:52.214018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:52.214108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:52.260878 1685746 cri.go:96] found id: ""
	I1222 01:42:52.260904 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.260913 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:52.260920 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:52.260986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:52.326163 1685746 cri.go:96] found id: ""
	I1222 01:42:52.326191 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.326200 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:52.326206 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:52.326268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:52.351586 1685746 cri.go:96] found id: ""
	I1222 01:42:52.351610 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.351619 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:52.351625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:52.351685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:52.378191 1685746 cri.go:96] found id: ""
	I1222 01:42:52.378271 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.378297 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:52.378320 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:52.378423 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:52.403988 1685746 cri.go:96] found id: ""
	I1222 01:42:52.404014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.404024 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:52.404030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:52.404115 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:52.434842 1685746 cri.go:96] found id: ""
	I1222 01:42:52.434870 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.434879 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:52.434888 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:52.434901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:52.493615 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:52.493659 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:52.509970 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:52.510008 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:52.573713 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:52.573748 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:52.573760 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.598497 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:52.598532 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.130037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:55.141017 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:55.141094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:55.166253 1685746 cri.go:96] found id: ""
	I1222 01:42:55.166279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.166289 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:55.166298 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:55.166358 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:55.190818 1685746 cri.go:96] found id: ""
	I1222 01:42:55.190844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.190856 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:55.190863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:55.190969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:55.216347 1685746 cri.go:96] found id: ""
	I1222 01:42:55.216380 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.216390 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:55.216397 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:55.216501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:55.259015 1685746 cri.go:96] found id: ""
	I1222 01:42:55.259091 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.259115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:55.259135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:55.259247 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:55.326026 1685746 cri.go:96] found id: ""
	I1222 01:42:55.326049 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.326058 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:55.326065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:55.326147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:55.350799 1685746 cri.go:96] found id: ""
	I1222 01:42:55.350823 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.350832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:55.350839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:55.350899 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:55.376097 1685746 cri.go:96] found id: ""
	I1222 01:42:55.376123 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.376133 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:55.376139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:55.376200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:55.401620 1685746 cri.go:96] found id: ""
	I1222 01:42:55.401693 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.401715 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:55.401740 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:55.401783 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.434315 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:55.434343 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:55.489616 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:55.489652 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:55.504798 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:55.504829 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:55.569246 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:55.569273 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:55.569285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.094905 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:58.105827 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:58.105902 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:58.131496 1685746 cri.go:96] found id: ""
	I1222 01:42:58.131522 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.131531 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:58.131538 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:58.131602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:58.156152 1685746 cri.go:96] found id: ""
	I1222 01:42:58.156179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.156188 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:58.156195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:58.156253 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:58.182075 1685746 cri.go:96] found id: ""
	I1222 01:42:58.182124 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.182140 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:58.182147 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:58.182211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:58.212714 1685746 cri.go:96] found id: ""
	I1222 01:42:58.212737 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.212746 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:58.212752 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:58.212811 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:58.256896 1685746 cri.go:96] found id: ""
	I1222 01:42:58.256919 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.256931 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:58.256938 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:58.257002 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:58.314212 1685746 cri.go:96] found id: ""
	I1222 01:42:58.314235 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.314243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:58.314250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:58.314311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:58.348822 1685746 cri.go:96] found id: ""
	I1222 01:42:58.348844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.348853 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:58.348860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:58.349006 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:58.375112 1685746 cri.go:96] found id: ""
	I1222 01:42:58.375139 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.375148 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:58.375157 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:58.375199 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:58.440769 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:58.440793 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:58.440807 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.466180 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:58.466214 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:58.498249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:58.498277 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:58.553912 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:58.553948 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.069587 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:01.080494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:01.080569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:01.106366 1685746 cri.go:96] found id: ""
	I1222 01:43:01.106393 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.106403 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:01.106409 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:01.106472 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:01.134991 1685746 cri.go:96] found id: ""
	I1222 01:43:01.135019 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.135028 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:01.135040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:01.135108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:01.161160 1685746 cri.go:96] found id: ""
	I1222 01:43:01.161188 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.161198 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:01.161205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:01.161268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:01.189244 1685746 cri.go:96] found id: ""
	I1222 01:43:01.189271 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.189281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:01.189288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:01.189353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:01.216039 1685746 cri.go:96] found id: ""
	I1222 01:43:01.216109 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.216123 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:01.216131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:01.216206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:01.255772 1685746 cri.go:96] found id: ""
	I1222 01:43:01.255803 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.255812 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:01.255818 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:01.255880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:01.331745 1685746 cri.go:96] found id: ""
	I1222 01:43:01.331771 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.331780 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:01.331787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:01.331856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:01.360958 1685746 cri.go:96] found id: ""
	I1222 01:43:01.360985 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.360995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:01.361003 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:01.361014 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:01.416443 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:01.416479 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.433706 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:01.433735 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:01.504365 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:01.504393 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:01.504405 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:01.530386 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:01.530421 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.060702 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:04.074701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:04.074781 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:04.104007 1685746 cri.go:96] found id: ""
	I1222 01:43:04.104034 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.104043 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:04.104050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:04.104110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:04.129051 1685746 cri.go:96] found id: ""
	I1222 01:43:04.129081 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.129091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:04.129098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:04.129160 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:04.155234 1685746 cri.go:96] found id: ""
	I1222 01:43:04.155260 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.155275 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:04.155282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:04.155344 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:04.180095 1685746 cri.go:96] found id: ""
	I1222 01:43:04.180120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.180130 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:04.180137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:04.180199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:04.204953 1685746 cri.go:96] found id: ""
	I1222 01:43:04.204976 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.204984 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:04.204991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:04.205052 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:04.231351 1685746 cri.go:96] found id: ""
	I1222 01:43:04.231376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.231385 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:04.231392 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:04.231452 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:04.269450 1685746 cri.go:96] found id: ""
	I1222 01:43:04.269476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.269485 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:04.269492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:04.269556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:04.310137 1685746 cri.go:96] found id: ""
	I1222 01:43:04.310210 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.310247 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:04.310276 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:04.310304 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:04.330066 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:04.330204 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:04.398531 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:04.398600 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:04.398622 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:04.423684 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:04.423715 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.455847 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:04.455915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.011267 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:07.022247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:07.022373 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:07.047710 1685746 cri.go:96] found id: ""
	I1222 01:43:07.047737 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.047746 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:07.047755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:07.047817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:07.071622 1685746 cri.go:96] found id: ""
	I1222 01:43:07.071644 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.071653 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:07.071662 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:07.071724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:07.100514 1685746 cri.go:96] found id: ""
	I1222 01:43:07.100539 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.100548 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:07.100555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:07.100622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:07.126740 1685746 cri.go:96] found id: ""
	I1222 01:43:07.126810 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.126833 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:07.126845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:07.126921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:07.156147 1685746 cri.go:96] found id: ""
	I1222 01:43:07.156174 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.156184 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:07.156190 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:07.156268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:07.185551 1685746 cri.go:96] found id: ""
	I1222 01:43:07.185574 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.185583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:07.185589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:07.185670 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:07.210495 1685746 cri.go:96] found id: ""
	I1222 01:43:07.210563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.210585 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:07.210608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:07.210679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:07.234671 1685746 cri.go:96] found id: ""
	I1222 01:43:07.234751 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.234775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:07.234799 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:07.234847 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.318902 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:07.318936 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:07.334947 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:07.334977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:07.400498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:07.400520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:07.400534 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:07.425576 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:07.425613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:09.957230 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:09.968065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:09.968142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:09.993760 1685746 cri.go:96] found id: ""
	I1222 01:43:09.993785 1685746 logs.go:282] 0 containers: []
	W1222 01:43:09.993794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:09.993802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:09.993870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:10.024110 1685746 cri.go:96] found id: ""
	I1222 01:43:10.024140 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.024151 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:10.024157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:10.024232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:10.053092 1685746 cri.go:96] found id: ""
	I1222 01:43:10.053122 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.053132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:10.053138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:10.053203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:10.078967 1685746 cri.go:96] found id: ""
	I1222 01:43:10.078994 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.079004 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:10.079011 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:10.079079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:10.105969 1685746 cri.go:96] found id: ""
	I1222 01:43:10.105993 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.106001 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:10.106008 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:10.106164 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:10.132413 1685746 cri.go:96] found id: ""
	I1222 01:43:10.132448 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.132457 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:10.132464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:10.132526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:10.158912 1685746 cri.go:96] found id: ""
	I1222 01:43:10.158941 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.158950 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:10.158957 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:10.159038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:10.185594 1685746 cri.go:96] found id: ""
	I1222 01:43:10.185621 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.185630 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:10.185639 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:10.185681 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:10.214349 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:10.214378 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:10.274002 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:10.274096 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:10.289686 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:10.289761 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:10.375337 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:10.375413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:10.375441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:12.901196 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:12.911625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:12.911710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:12.936713 1685746 cri.go:96] found id: ""
	I1222 01:43:12.936738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.936747 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:12.936753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:12.936827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:12.961849 1685746 cri.go:96] found id: ""
	I1222 01:43:12.961870 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.961879 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:12.961888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:12.961950 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:12.990893 1685746 cri.go:96] found id: ""
	I1222 01:43:12.990919 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.990929 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:12.990935 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:12.990996 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:13.033584 1685746 cri.go:96] found id: ""
	I1222 01:43:13.033611 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.033621 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:13.033628 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:13.033691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:13.062192 1685746 cri.go:96] found id: ""
	I1222 01:43:13.062216 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.062225 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:13.062232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:13.062297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:13.088173 1685746 cri.go:96] found id: ""
	I1222 01:43:13.088213 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.088223 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:13.088230 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:13.088312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:13.115014 1685746 cri.go:96] found id: ""
	I1222 01:43:13.115051 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.115062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:13.115069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:13.115147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:13.140656 1685746 cri.go:96] found id: ""
	I1222 01:43:13.140691 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.140700 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:13.140710 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:13.140722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:13.177585 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:13.177660 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:13.233128 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:13.233162 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:13.251827 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:13.251907 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:13.360494 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:13.360570 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:13.360589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:15.887876 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:15.898631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:15.898708 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:15.923707 1685746 cri.go:96] found id: ""
	I1222 01:43:15.923732 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.923743 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:15.923750 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:15.923829 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:15.950453 1685746 cri.go:96] found id: ""
	I1222 01:43:15.950478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.950492 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:15.950498 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:15.950612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:15.975355 1685746 cri.go:96] found id: ""
	I1222 01:43:15.975436 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.975460 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:15.975475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:15.975549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:16.000992 1685746 cri.go:96] found id: ""
	I1222 01:43:16.001026 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.001036 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:16.001043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:16.001134 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:16.033538 1685746 cri.go:96] found id: ""
	I1222 01:43:16.033563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.033572 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:16.033578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:16.033641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:16.059451 1685746 cri.go:96] found id: ""
	I1222 01:43:16.059476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.059486 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:16.059492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:16.059556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:16.085491 1685746 cri.go:96] found id: ""
	I1222 01:43:16.085515 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.085524 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:16.085530 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:16.085598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:16.111197 1685746 cri.go:96] found id: ""
	I1222 01:43:16.111220 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.111228 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:16.111237 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:16.111249 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:16.167058 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:16.167095 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:16.182867 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:16.182947 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:16.303679 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:16.303753 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:16.303780 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:16.336416 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:16.336497 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:18.869703 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:18.880527 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:18.880602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:18.906051 1685746 cri.go:96] found id: ""
	I1222 01:43:18.906102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.906112 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:18.906119 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:18.906181 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:18.931999 1685746 cri.go:96] found id: ""
	I1222 01:43:18.932027 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.932036 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:18.932043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:18.932110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:18.959202 1685746 cri.go:96] found id: ""
	I1222 01:43:18.959230 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.959239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:18.959246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:18.959307 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:18.988050 1685746 cri.go:96] found id: ""
	I1222 01:43:18.988075 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.988084 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:18.988091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:18.988179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:19.014062 1685746 cri.go:96] found id: ""
	I1222 01:43:19.014116 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.014125 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:19.014132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:19.014197 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:19.041419 1685746 cri.go:96] found id: ""
	I1222 01:43:19.041454 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.041464 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:19.041471 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:19.041548 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:19.067079 1685746 cri.go:96] found id: ""
	I1222 01:43:19.067114 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.067123 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:19.067130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:19.067199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:19.093005 1685746 cri.go:96] found id: ""
	I1222 01:43:19.093041 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.093050 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:19.093059 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:19.093070 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:19.148083 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:19.148119 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:19.163510 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:19.163547 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:19.228482 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:19.228505 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:19.228519 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:19.264345 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:19.264402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:21.823213 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:21.834353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:21.834427 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:21.860777 1685746 cri.go:96] found id: ""
	I1222 01:43:21.860805 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.860815 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:21.860823 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:21.860889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:21.889075 1685746 cri.go:96] found id: ""
	I1222 01:43:21.889150 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.889173 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:21.889195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:21.889284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:21.915306 1685746 cri.go:96] found id: ""
	I1222 01:43:21.915334 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.915343 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:21.915349 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:21.915413 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:21.940239 1685746 cri.go:96] found id: ""
	I1222 01:43:21.940610 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.940624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:21.940633 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:21.940694 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:21.966280 1685746 cri.go:96] found id: ""
	I1222 01:43:21.966307 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.966316 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:21.966323 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:21.966392 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:21.991888 1685746 cri.go:96] found id: ""
	I1222 01:43:21.991916 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.991925 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:21.991934 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:21.991993 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:22.021851 1685746 cri.go:96] found id: ""
	I1222 01:43:22.021878 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.021888 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:22.021895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:22.021962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:22.052435 1685746 cri.go:96] found id: ""
	I1222 01:43:22.052464 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.052473 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:22.052483 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:22.052495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:22.128628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:22.128653 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:22.128668 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:22.154140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:22.154180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:22.190762 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:22.190790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:22.254223 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:22.254264 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:24.790679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:24.801308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:24.801380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:24.826466 1685746 cri.go:96] found id: ""
	I1222 01:43:24.826492 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.826501 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:24.826508 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:24.826573 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:24.852169 1685746 cri.go:96] found id: ""
	I1222 01:43:24.852196 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.852206 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:24.852212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:24.852277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:24.876880 1685746 cri.go:96] found id: ""
	I1222 01:43:24.876906 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.876915 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:24.876922 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:24.876986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:24.902741 1685746 cri.go:96] found id: ""
	I1222 01:43:24.902769 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.902778 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:24.902785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:24.902851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:24.928580 1685746 cri.go:96] found id: ""
	I1222 01:43:24.928603 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.928612 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:24.928618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:24.928686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:24.958505 1685746 cri.go:96] found id: ""
	I1222 01:43:24.958533 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.958542 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:24.958548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:24.958610 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:24.988354 1685746 cri.go:96] found id: ""
	I1222 01:43:24.988394 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.988403 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:24.988410 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:24.988471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:25.022402 1685746 cri.go:96] found id: ""
	I1222 01:43:25.022445 1685746 logs.go:282] 0 containers: []
	W1222 01:43:25.022455 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:25.022465 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:25.022477 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:25.090031 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:25.090122 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:25.090152 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:25.117050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:25.117090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:25.146413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:25.146443 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:25.203377 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:25.203415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.718901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:27.729888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:27.729962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:27.753619 1685746 cri.go:96] found id: ""
	I1222 01:43:27.753643 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.753651 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:27.753657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:27.753734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:27.778439 1685746 cri.go:96] found id: ""
	I1222 01:43:27.778468 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.778477 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:27.778484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:27.778549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:27.803747 1685746 cri.go:96] found id: ""
	I1222 01:43:27.803776 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.803786 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:27.803792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:27.803851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:27.833272 1685746 cri.go:96] found id: ""
	I1222 01:43:27.833295 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.833303 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:27.833310 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:27.833383 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:27.858574 1685746 cri.go:96] found id: ""
	I1222 01:43:27.858602 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.858613 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:27.858619 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:27.858680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:27.884333 1685746 cri.go:96] found id: ""
	I1222 01:43:27.884361 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.884418 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:27.884434 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:27.884509 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:27.914000 1685746 cri.go:96] found id: ""
	I1222 01:43:27.914111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.914145 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:27.914159 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:27.914221 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:27.939204 1685746 cri.go:96] found id: ""
	I1222 01:43:27.939228 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.939237 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:27.939246 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:27.939257 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.953702 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:27.953728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:28.021111 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:28.021131 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:28.021144 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:28.048052 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:28.048090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:28.080739 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:28.080776 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.641402 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:30.652837 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:30.652908 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:30.679700 1685746 cri.go:96] found id: ""
	I1222 01:43:30.679727 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.679736 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:30.679743 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:30.679872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:30.708517 1685746 cri.go:96] found id: ""
	I1222 01:43:30.708545 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.708554 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:30.708561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:30.708622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:30.737801 1685746 cri.go:96] found id: ""
	I1222 01:43:30.737829 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.737838 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:30.737845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:30.737916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:30.764096 1685746 cri.go:96] found id: ""
	I1222 01:43:30.764124 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.764134 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:30.764141 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:30.764252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:30.789565 1685746 cri.go:96] found id: ""
	I1222 01:43:30.789591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.789599 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:30.789607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:30.789684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:30.822764 1685746 cri.go:96] found id: ""
	I1222 01:43:30.822833 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.822857 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:30.822871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:30.822957 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:30.848727 1685746 cri.go:96] found id: ""
	I1222 01:43:30.848754 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.848763 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:30.848770 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:30.848830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:30.876920 1685746 cri.go:96] found id: ""
	I1222 01:43:30.876945 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.876954 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:30.876963 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:30.876974 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.932977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:30.933015 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:30.950177 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:30.950205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:31.021720 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:31.021745 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:31.021757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:31.047873 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:31.047908 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.582285 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:33.593589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:33.593677 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:33.619720 1685746 cri.go:96] found id: ""
	I1222 01:43:33.619746 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.619755 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:33.619762 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:33.619823 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:33.644535 1685746 cri.go:96] found id: ""
	I1222 01:43:33.644558 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.644567 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:33.644573 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:33.644636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:33.674069 1685746 cri.go:96] found id: ""
	I1222 01:43:33.674133 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.674144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:33.674151 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:33.674216 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:33.700076 1685746 cri.go:96] found id: ""
	I1222 01:43:33.700102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.700111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:33.700118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:33.700179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:33.725155 1685746 cri.go:96] found id: ""
	I1222 01:43:33.725182 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.725192 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:33.725199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:33.725259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:33.752045 1685746 cri.go:96] found id: ""
	I1222 01:43:33.752120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.752144 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:33.752166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:33.752270 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:33.776869 1685746 cri.go:96] found id: ""
	I1222 01:43:33.776897 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.776917 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:33.776925 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:33.776995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:33.804537 1685746 cri.go:96] found id: ""
	I1222 01:43:33.804559 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.804568 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:33.804577 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:33.804589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:33.868017 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:33.868038 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:33.868050 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:33.893225 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:33.893268 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.925850 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:33.925880 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:33.984794 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:33.984827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.500237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:36.517959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:36.518035 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:36.566551 1685746 cri.go:96] found id: ""
	I1222 01:43:36.566578 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.566587 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:36.566594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:36.566675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:36.601952 1685746 cri.go:96] found id: ""
	I1222 01:43:36.601979 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.601988 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:36.601994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:36.602069 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:36.628093 1685746 cri.go:96] found id: ""
	I1222 01:43:36.628123 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.628132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:36.628138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:36.628199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:36.653428 1685746 cri.go:96] found id: ""
	I1222 01:43:36.653457 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.653471 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:36.653478 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:36.653536 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:36.680092 1685746 cri.go:96] found id: ""
	I1222 01:43:36.680115 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.680124 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:36.680130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:36.680189 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:36.706982 1685746 cri.go:96] found id: ""
	I1222 01:43:36.707020 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.707030 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:36.707037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:36.707112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:36.731661 1685746 cri.go:96] found id: ""
	I1222 01:43:36.731738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.731760 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:36.731783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:36.731878 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:36.759936 1685746 cri.go:96] found id: ""
	I1222 01:43:36.759958 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.759966 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:36.759975 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:36.759986 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.774574 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:36.774601 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:36.840390 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:36.840453 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:36.840474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:36.865823 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:36.865861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:36.895884 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:36.895914 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.451426 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:39.462101 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:39.462175 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:39.492238 1685746 cri.go:96] found id: ""
	I1222 01:43:39.492261 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.492270 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:39.492281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:39.492355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:39.573214 1685746 cri.go:96] found id: ""
	I1222 01:43:39.573236 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.573244 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:39.573251 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:39.573323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:39.599147 1685746 cri.go:96] found id: ""
	I1222 01:43:39.599172 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.599181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:39.599188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:39.599251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:39.624765 1685746 cri.go:96] found id: ""
	I1222 01:43:39.624850 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.624874 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:39.624915 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:39.625014 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:39.656217 1685746 cri.go:96] found id: ""
	I1222 01:43:39.656244 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.656253 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:39.656260 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:39.656349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:39.682103 1685746 cri.go:96] found id: ""
	I1222 01:43:39.682127 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.682136 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:39.682143 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:39.682211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:39.707971 1685746 cri.go:96] found id: ""
	I1222 01:43:39.707999 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.708008 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:39.708015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:39.708075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:39.737148 1685746 cri.go:96] found id: ""
	I1222 01:43:39.737175 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.737184 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:39.737194 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:39.737210 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:39.805404 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:39.805427 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:39.805441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:39.835140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:39.835180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:39.864203 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:39.864232 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.919399 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:39.919435 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.434907 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:42.447524 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:42.447601 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:42.474430 1685746 cri.go:96] found id: ""
	I1222 01:43:42.474452 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.474468 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:42.474475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:42.474534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:42.539132 1685746 cri.go:96] found id: ""
	I1222 01:43:42.539154 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.539178 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:42.539186 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:42.539287 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:42.575001 1685746 cri.go:96] found id: ""
	I1222 01:43:42.575023 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.575031 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:42.575037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:42.575095 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:42.599923 1685746 cri.go:96] found id: ""
	I1222 01:43:42.599947 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.599956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:42.599963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:42.600027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:42.624602 1685746 cri.go:96] found id: ""
	I1222 01:43:42.624630 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.624640 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:42.624646 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:42.624707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:42.649899 1685746 cri.go:96] found id: ""
	I1222 01:43:42.649925 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.649934 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:42.649941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:42.650001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:42.675756 1685746 cri.go:96] found id: ""
	I1222 01:43:42.675836 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.675860 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:42.675897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:42.675973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:42.702958 1685746 cri.go:96] found id: ""
	I1222 01:43:42.702995 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.703005 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:42.703014 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:42.703025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:42.759487 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:42.759526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.774803 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:42.774835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:42.841752 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:42.841776 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:42.841790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:42.868632 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:42.868666 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:45.400104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:45.410950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:45.411071 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:45.436920 1685746 cri.go:96] found id: ""
	I1222 01:43:45.436957 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.436966 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:45.436973 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:45.437044 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:45.464719 1685746 cri.go:96] found id: ""
	I1222 01:43:45.464755 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.464765 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:45.464771 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:45.464841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:45.501180 1685746 cri.go:96] found id: ""
	I1222 01:43:45.501207 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.501226 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:45.501234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:45.501305 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:45.547294 1685746 cri.go:96] found id: ""
	I1222 01:43:45.547339 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.547350 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:45.547357 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:45.547435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:45.581484 1685746 cri.go:96] found id: ""
	I1222 01:43:45.581526 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.581535 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:45.581542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:45.581613 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:45.610563 1685746 cri.go:96] found id: ""
	I1222 01:43:45.610591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.610600 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:45.610607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:45.610679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:45.637028 1685746 cri.go:96] found id: ""
	I1222 01:43:45.637054 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.637064 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:45.637070 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:45.637141 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:45.662660 1685746 cri.go:96] found id: ""
	I1222 01:43:45.662740 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.662756 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:45.662767 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:45.662779 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:45.719167 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:45.719208 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:45.734405 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:45.734438 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:45.802645 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:45.802667 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:45.802680 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:45.829402 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:45.829439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:48.362229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:48.372648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:48.372722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:48.399816 1685746 cri.go:96] found id: ""
	I1222 01:43:48.399843 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.399852 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:48.399859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:48.399922 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:48.424774 1685746 cri.go:96] found id: ""
	I1222 01:43:48.424800 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.424809 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:48.424816 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:48.424873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:48.449402 1685746 cri.go:96] found id: ""
	I1222 01:43:48.449429 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.449438 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:48.449444 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:48.449501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:48.481785 1685746 cri.go:96] found id: ""
	I1222 01:43:48.481811 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.481822 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:48.481828 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:48.481884 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:48.535392 1685746 cri.go:96] found id: ""
	I1222 01:43:48.535421 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.535429 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:48.535435 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:48.535495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:48.581091 1685746 cri.go:96] found id: ""
	I1222 01:43:48.581119 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.581128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:48.581135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:48.581195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:48.608115 1685746 cri.go:96] found id: ""
	I1222 01:43:48.608143 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.608152 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:48.608158 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:48.608222 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:48.634982 1685746 cri.go:96] found id: ""
	I1222 01:43:48.635007 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.635015 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:48.635024 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:48.635040 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:48.690980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:48.691017 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:48.706101 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:48.706126 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:48.773880 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:48.773903 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:48.773915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:48.798770 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:48.798805 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:51.326747 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:51.337244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:51.337316 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:51.361650 1685746 cri.go:96] found id: ""
	I1222 01:43:51.361674 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.361685 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:51.361691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:51.361752 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:51.387243 1685746 cri.go:96] found id: ""
	I1222 01:43:51.387267 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.387275 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:51.387282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:51.387339 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:51.412051 1685746 cri.go:96] found id: ""
	I1222 01:43:51.412076 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.412085 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:51.412091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:51.412152 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:51.442828 1685746 cri.go:96] found id: ""
	I1222 01:43:51.442855 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.442864 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:51.442871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:51.442931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:51.469084 1685746 cri.go:96] found id: ""
	I1222 01:43:51.469111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.469120 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:51.469128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:51.469196 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:51.505900 1685746 cri.go:96] found id: ""
	I1222 01:43:51.505931 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.505940 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:51.505947 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:51.506015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:51.544756 1685746 cri.go:96] found id: ""
	I1222 01:43:51.544794 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.544803 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:51.544810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:51.544881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:51.595192 1685746 cri.go:96] found id: ""
	I1222 01:43:51.595274 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.595308 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:51.595330 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:51.595370 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:51.651780 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:51.651815 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:51.666583 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:51.666611 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:51.736962 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:51.736984 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:51.736997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:51.763237 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:51.763272 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.292529 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:54.303313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:54.303393 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:54.329228 1685746 cri.go:96] found id: ""
	I1222 01:43:54.329251 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.329260 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:54.329266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:54.329325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:54.353443 1685746 cri.go:96] found id: ""
	I1222 01:43:54.353478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.353488 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:54.353495 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:54.353565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:54.385463 1685746 cri.go:96] found id: ""
	I1222 01:43:54.385487 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.385496 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:54.385502 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:54.385571 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:54.413065 1685746 cri.go:96] found id: ""
	I1222 01:43:54.413135 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.413160 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:54.413209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:54.413290 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:54.440350 1685746 cri.go:96] found id: ""
	I1222 01:43:54.440376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.440385 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:54.440391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:54.440469 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:54.469549 1685746 cri.go:96] found id: ""
	I1222 01:43:54.469583 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.469592 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:54.469599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:54.469668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:54.514637 1685746 cri.go:96] found id: ""
	I1222 01:43:54.514714 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.514738 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:54.514761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:54.514876 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:54.546685 1685746 cri.go:96] found id: ""
	I1222 01:43:54.546708 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.546717 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:54.546726 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:54.546737 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:54.576240 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:54.576324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.618824 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:54.618853 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:54.673867 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:54.673900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:54.689028 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:54.689057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:54.755999 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:57.257146 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:57.268025 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:57.268100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:57.292709 1685746 cri.go:96] found id: ""
	I1222 01:43:57.292738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.292748 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:57.292761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:57.292826 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:57.321159 1685746 cri.go:96] found id: ""
	I1222 01:43:57.321186 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.321195 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:57.321201 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:57.321264 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:57.350573 1685746 cri.go:96] found id: ""
	I1222 01:43:57.350601 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.350611 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:57.350620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:57.350682 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:57.380391 1685746 cri.go:96] found id: ""
	I1222 01:43:57.380425 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.380435 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:57.380441 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:57.380502 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:57.404977 1685746 cri.go:96] found id: ""
	I1222 01:43:57.405003 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.405012 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:57.405018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:57.405080 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:57.431206 1685746 cri.go:96] found id: ""
	I1222 01:43:57.431234 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.431243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:57.431250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:57.431310 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:57.458352 1685746 cri.go:96] found id: ""
	I1222 01:43:57.458378 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.458387 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:57.458393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:57.458454 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:57.487672 1685746 cri.go:96] found id: ""
	I1222 01:43:57.487700 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.487709 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:57.487718 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:57.487729 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:57.523843 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:57.523925 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:57.589400 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:57.589476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:57.650987 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:57.651025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:57.666115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:57.666151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:57.735484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.237195 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:00.303116 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:00.303238 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:00.349571 1685746 cri.go:96] found id: ""
	I1222 01:44:00.349604 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.349614 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:00.349623 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:00.349691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:00.397703 1685746 cri.go:96] found id: ""
	I1222 01:44:00.397728 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.397757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:00.397772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:00.397869 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:00.445846 1685746 cri.go:96] found id: ""
	I1222 01:44:00.445883 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.445891 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:00.445899 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:00.445975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:00.481389 1685746 cri.go:96] found id: ""
	I1222 01:44:00.481433 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.481443 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:00.481451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:00.481545 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:00.555281 1685746 cri.go:96] found id: ""
	I1222 01:44:00.555323 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.555333 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:00.555339 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:00.555417 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:00.610523 1685746 cri.go:96] found id: ""
	I1222 01:44:00.610554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.610565 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:00.610572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:00.610639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:00.640211 1685746 cri.go:96] found id: ""
	I1222 01:44:00.640242 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.640252 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:00.640261 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:00.640334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:00.672011 1685746 cri.go:96] found id: ""
	I1222 01:44:00.672037 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.672046 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:00.672055 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:00.672067 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:00.730908 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:00.730946 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:00.746205 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:00.746280 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:00.814946 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.814969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:00.814982 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:00.841341 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:00.841376 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:03.372817 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:03.383361 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:03.383438 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:03.407536 1685746 cri.go:96] found id: ""
	I1222 01:44:03.407558 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.407566 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:03.407572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:03.407631 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:03.433092 1685746 cri.go:96] found id: ""
	I1222 01:44:03.433120 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.433129 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:03.433135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:03.433193 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:03.462721 1685746 cri.go:96] found id: ""
	I1222 01:44:03.462750 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.462759 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:03.462765 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:03.462824 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:03.512849 1685746 cri.go:96] found id: ""
	I1222 01:44:03.512871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.512880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:03.512887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:03.512946 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:03.574191 1685746 cri.go:96] found id: ""
	I1222 01:44:03.574217 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.574226 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:03.574232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:03.574299 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:03.600756 1685746 cri.go:96] found id: ""
	I1222 01:44:03.600785 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.600794 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:03.600801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:03.600865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:03.627524 1685746 cri.go:96] found id: ""
	I1222 01:44:03.627554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.627564 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:03.627571 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:03.627632 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:03.652207 1685746 cri.go:96] found id: ""
	I1222 01:44:03.652230 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.652239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:03.652248 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:03.652258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:03.710392 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:03.710427 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:03.725850 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:03.725877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:03.793641 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:03.793708 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:03.793725 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:03.819086 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:03.819122 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:06.350666 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:06.361704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:06.361772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:06.387959 1685746 cri.go:96] found id: ""
	I1222 01:44:06.387985 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.387994 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:06.388001 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:06.388063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:06.420195 1685746 cri.go:96] found id: ""
	I1222 01:44:06.420229 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.420239 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:06.420245 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:06.420318 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:06.444201 1685746 cri.go:96] found id: ""
	I1222 01:44:06.444228 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.444237 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:06.444244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:06.444326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:06.469606 1685746 cri.go:96] found id: ""
	I1222 01:44:06.469635 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.469644 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:06.469650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:06.469714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:06.516673 1685746 cri.go:96] found id: ""
	I1222 01:44:06.516703 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.516712 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:06.516719 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:06.516783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:06.554976 1685746 cri.go:96] found id: ""
	I1222 01:44:06.555004 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.555014 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:06.555020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:06.555079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:06.587406 1685746 cri.go:96] found id: ""
	I1222 01:44:06.587434 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.587443 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:06.587449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:06.587511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:06.620595 1685746 cri.go:96] found id: ""
	I1222 01:44:06.620623 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.620633 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:06.620642 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:06.620655 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:06.677532 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:06.677567 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:06.692910 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:06.692987 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:06.760398 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:06.750827   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.751480   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.753963   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.754997   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.755795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:44:06.760423 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:06.760436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:06.785709 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:06.785743 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.314372 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:09.325259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:09.325349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:09.350687 1685746 cri.go:96] found id: ""
	I1222 01:44:09.350712 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.350726 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:09.350733 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:09.350794 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:09.376225 1685746 cri.go:96] found id: ""
	I1222 01:44:09.376252 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.376260 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:09.376267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:09.376332 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:09.402898 1685746 cri.go:96] found id: ""
	I1222 01:44:09.402922 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.402931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:09.402937 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:09.403008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:09.428038 1685746 cri.go:96] found id: ""
	I1222 01:44:09.428066 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.428075 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:09.428082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:09.428150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:09.456772 1685746 cri.go:96] found id: ""
	I1222 01:44:09.456798 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.456806 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:09.456813 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:09.456871 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:09.484926 1685746 cri.go:96] found id: ""
	I1222 01:44:09.484953 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.484962 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:09.484968 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:09.485029 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:09.521247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.521276 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.521285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:09.521292 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:09.521361 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:09.559247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.559283 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.559292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:09.559301 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:09.559313 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:09.576452 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:09.576488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:09.647498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:09.639570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.640313   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.641853   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.642396   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.643920   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:44:09.647522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:09.647535 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:09.672763 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:09.672799 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.703339 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:09.703367 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.258428 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:12.269740 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:12.269827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:12.295142 1685746 cri.go:96] found id: ""
	I1222 01:44:12.295166 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.295174 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:12.295181 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:12.295239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:12.324426 1685746 cri.go:96] found id: ""
	I1222 01:44:12.324453 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.324462 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:12.324468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:12.324528 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:12.352908 1685746 cri.go:96] found id: ""
	I1222 01:44:12.352936 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.352945 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:12.352952 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:12.353016 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:12.382056 1685746 cri.go:96] found id: ""
	I1222 01:44:12.382106 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.382115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:12.382122 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:12.382184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:12.405895 1685746 cri.go:96] found id: ""
	I1222 01:44:12.405926 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.405935 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:12.405941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:12.406063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:12.432020 1685746 cri.go:96] found id: ""
	I1222 01:44:12.432046 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.432055 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:12.432062 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:12.432167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:12.460268 1685746 cri.go:96] found id: ""
	I1222 01:44:12.460316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.460325 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:12.460332 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:12.460391 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:12.510214 1685746 cri.go:96] found id: ""
	I1222 01:44:12.510243 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.510252 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:12.510261 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:12.510281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:12.574866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:12.574895 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.630459 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:12.630495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:12.645639 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:12.645667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:12.715658 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:12.707654   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.708218   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.709705   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.710254   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.711749   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:44:12.715678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:12.715691 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.242028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:15.253031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:15.253105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:15.283751 1685746 cri.go:96] found id: ""
	I1222 01:44:15.283784 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.283794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:15.283800 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:15.283865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:15.308803 1685746 cri.go:96] found id: ""
	I1222 01:44:15.308830 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.308840 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:15.308846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:15.308911 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:15.334334 1685746 cri.go:96] found id: ""
	I1222 01:44:15.334362 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.334371 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:15.334378 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:15.334437 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:15.363819 1685746 cri.go:96] found id: ""
	I1222 01:44:15.363843 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.363852 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:15.363859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:15.363920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:15.389166 1685746 cri.go:96] found id: ""
	I1222 01:44:15.389194 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.389203 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:15.389211 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:15.389275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:15.418948 1685746 cri.go:96] found id: ""
	I1222 01:44:15.419022 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.419035 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:15.419042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:15.419135 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:15.446013 1685746 cri.go:96] found id: ""
	I1222 01:44:15.446105 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.446130 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:15.446162 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:15.446236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:15.470779 1685746 cri.go:96] found id: ""
	I1222 01:44:15.470806 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.470815 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:15.470825 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:15.470857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:15.551154 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:15.551246 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:15.578834 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:15.578861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:15.644949 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:15.637144   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.637770   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639327   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639800   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.641301   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 01:44:15.644969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:15.644981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.670551 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:15.670585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:18.202679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:18.213735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:18.213812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:18.239304 1685746 cri.go:96] found id: ""
	I1222 01:44:18.239327 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.239336 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:18.239342 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:18.239401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:18.265064 1685746 cri.go:96] found id: ""
	I1222 01:44:18.265089 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.265098 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:18.265104 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:18.265165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:18.290606 1685746 cri.go:96] found id: ""
	I1222 01:44:18.290642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.290652 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:18.290659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:18.290734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:18.317208 1685746 cri.go:96] found id: ""
	I1222 01:44:18.317231 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.317240 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:18.317246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:18.317306 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:18.342186 1685746 cri.go:96] found id: ""
	I1222 01:44:18.342207 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.342216 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:18.342222 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:18.342280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:18.367436 1685746 cri.go:96] found id: ""
	I1222 01:44:18.367468 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.367477 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:18.367484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:18.367572 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:18.392591 1685746 cri.go:96] found id: ""
	I1222 01:44:18.392616 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.392625 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:18.392632 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:18.392691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:18.417782 1685746 cri.go:96] found id: ""
	I1222 01:44:18.417820 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.417829 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:18.417838 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:18.417850 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:18.475370 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:18.475402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:18.496693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:18.496722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:18.602667 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:18.602690 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:18.602704 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:18.628074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:18.628158 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:21.160991 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:21.171843 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:21.171925 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:21.197008 1685746 cri.go:96] found id: ""
	I1222 01:44:21.197035 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.197045 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:21.197051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:21.197111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:21.222701 1685746 cri.go:96] found id: ""
	I1222 01:44:21.222731 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.222740 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:21.222747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:21.222812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:21.247835 1685746 cri.go:96] found id: ""
	I1222 01:44:21.247858 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.247867 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:21.247874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:21.247932 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:21.272366 1685746 cri.go:96] found id: ""
	I1222 01:44:21.272400 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.272411 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:21.272418 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:21.272483 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:21.297348 1685746 cri.go:96] found id: ""
	I1222 01:44:21.297375 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.297384 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:21.297391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:21.297449 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:21.321989 1685746 cri.go:96] found id: ""
	I1222 01:44:21.322013 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.322022 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:21.322029 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:21.322112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:21.350652 1685746 cri.go:96] found id: ""
	I1222 01:44:21.350677 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.350685 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:21.350691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:21.350754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:21.382678 1685746 cri.go:96] found id: ""
	I1222 01:44:21.382748 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.382773 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:21.382791 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:21.382804 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:21.438683 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:21.438718 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:21.453712 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:21.453745 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:21.571593 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:21.571621 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:21.571635 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:21.598254 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:21.598290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:24.133046 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:24.144639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:24.144716 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:24.170797 1685746 cri.go:96] found id: ""
	I1222 01:44:24.170821 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.170830 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:24.170838 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:24.170901 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:24.198790 1685746 cri.go:96] found id: ""
	I1222 01:44:24.198813 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.198822 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:24.198830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:24.198892 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:24.223222 1685746 cri.go:96] found id: ""
	I1222 01:44:24.223245 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.223253 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:24.223259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:24.223317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:24.248490 1685746 cri.go:96] found id: ""
	I1222 01:44:24.248573 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.248590 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:24.248598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:24.248678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:24.273541 1685746 cri.go:96] found id: ""
	I1222 01:44:24.273570 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.273578 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:24.273585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:24.273647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:24.298819 1685746 cri.go:96] found id: ""
	I1222 01:44:24.298847 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.298856 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:24.298863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:24.298921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:24.324215 1685746 cri.go:96] found id: ""
	I1222 01:44:24.324316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.324334 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:24.324341 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:24.324420 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:24.349700 1685746 cri.go:96] found id: ""
	I1222 01:44:24.349727 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.349736 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:24.349745 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:24.349756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:24.405384 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:24.405419 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:24.420496 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:24.420524 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:24.481353 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:24.481378 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:24.481392 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:24.507731 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:24.508076 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.051455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:27.062328 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:27.062402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:27.088764 1685746 cri.go:96] found id: ""
	I1222 01:44:27.088786 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.088795 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:27.088801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:27.088859 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:27.113929 1685746 cri.go:96] found id: ""
	I1222 01:44:27.113951 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.113959 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:27.113966 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:27.114027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:27.139537 1685746 cri.go:96] found id: ""
	I1222 01:44:27.139562 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.139577 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:27.139584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:27.139645 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:27.164769 1685746 cri.go:96] found id: ""
	I1222 01:44:27.164792 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.164800 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:27.164807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:27.164867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:27.190396 1685746 cri.go:96] found id: ""
	I1222 01:44:27.190424 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.190433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:27.190440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:27.190503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:27.215574 1685746 cri.go:96] found id: ""
	I1222 01:44:27.215599 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.215608 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:27.215616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:27.215684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:27.246139 1685746 cri.go:96] found id: ""
	I1222 01:44:27.246162 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.246172 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:27.246178 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:27.246239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:27.272153 1685746 cri.go:96] found id: ""
	I1222 01:44:27.272177 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.272185 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:27.272193 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:27.272205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.303523 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:27.303552 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:27.363938 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:27.363985 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:27.380130 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:27.380163 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:27.443113 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:27.443137 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:27.443149 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:29.969751 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:29.980564 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:29.980638 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:30.027489 1685746 cri.go:96] found id: ""
	I1222 01:44:30.027515 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.027524 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:30.027532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:30.027604 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:30.063116 1685746 cri.go:96] found id: ""
	I1222 01:44:30.063142 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.063152 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:30.063160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:30.063229 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:30.111428 1685746 cri.go:96] found id: ""
	I1222 01:44:30.111455 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.111466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:30.111473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:30.111543 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:30.142346 1685746 cri.go:96] found id: ""
	I1222 01:44:30.142381 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.142391 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:30.142406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:30.142499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:30.171044 1685746 cri.go:96] found id: ""
	I1222 01:44:30.171068 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.171078 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:30.171084 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:30.171150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:30.206010 1685746 cri.go:96] found id: ""
	I1222 01:44:30.206034 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.206044 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:30.206051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:30.206225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:30.235230 1685746 cri.go:96] found id: ""
	I1222 01:44:30.235255 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.235264 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:30.235272 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:30.235404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:30.262624 1685746 cri.go:96] found id: ""
	I1222 01:44:30.262651 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.262661 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:30.262671 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:30.262689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:30.320010 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:30.320048 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:30.336273 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:30.336303 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:30.407334 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:30.407358 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:30.407373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:30.432976 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:30.433010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:32.965996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:32.976893 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:32.976972 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:33.004108 1685746 cri.go:96] found id: ""
	I1222 01:44:33.004138 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.004149 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:33.004157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:33.004293 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:33.032305 1685746 cri.go:96] found id: ""
	I1222 01:44:33.032333 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.032343 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:33.032350 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:33.032410 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:33.060572 1685746 cri.go:96] found id: ""
	I1222 01:44:33.060600 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.060610 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:33.060616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:33.060680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:33.086067 1685746 cri.go:96] found id: ""
	I1222 01:44:33.086112 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.086122 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:33.086129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:33.086188 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:33.112283 1685746 cri.go:96] found id: ""
	I1222 01:44:33.112310 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.112320 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:33.112326 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:33.112390 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:33.143337 1685746 cri.go:96] found id: ""
	I1222 01:44:33.143363 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.143372 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:33.143379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:33.143441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:33.169224 1685746 cri.go:96] found id: ""
	I1222 01:44:33.169250 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.169259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:33.169267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:33.169327 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:33.198401 1685746 cri.go:96] found id: ""
	I1222 01:44:33.198422 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.198431 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:33.198440 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:33.198451 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:33.256328 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:33.256364 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:33.271899 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:33.271930 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:33.338753 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:33.338786 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:33.338800 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:33.364007 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:33.364042 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:35.895269 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:35.906191 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:35.906266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:35.931271 1685746 cri.go:96] found id: ""
	I1222 01:44:35.931297 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.931306 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:35.931313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:35.931372 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:35.958259 1685746 cri.go:96] found id: ""
	I1222 01:44:35.958289 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.958298 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:35.958312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:35.958414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:35.982836 1685746 cri.go:96] found id: ""
	I1222 01:44:35.982861 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.982871 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:35.982877 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:35.982937 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:36.012610 1685746 cri.go:96] found id: ""
	I1222 01:44:36.012642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.012652 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:36.012659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:36.012739 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:36.039888 1685746 cri.go:96] found id: ""
	I1222 01:44:36.039914 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.039924 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:36.039933 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:36.039995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:36.070115 1685746 cri.go:96] found id: ""
	I1222 01:44:36.070144 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.070153 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:36.070160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:36.070220 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:36.095790 1685746 cri.go:96] found id: ""
	I1222 01:44:36.095871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.095887 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:36.095896 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:36.095967 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:36.122442 1685746 cri.go:96] found id: ""
	I1222 01:44:36.122519 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.122531 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:36.122570 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:36.122585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:36.151370 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:36.151396 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:36.206896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:36.206937 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:36.222382 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:36.222413 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:36.290888 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:36.290912 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:36.290927 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:38.822770 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:38.837195 1685746 out.go:203] 
	W1222 01:44:38.840003 1685746 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:44:38.840044 1685746 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:44:38.840057 1685746 out.go:285] * Related issues:
	W1222 01:44:38.840077 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:44:38.840096 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:44:38.842944 1685746 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.471912430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.471984422Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472090343Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472165782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472253611Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472320967Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472396340Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472469810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472535599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472630435Z" level=info msg="Connect containerd service"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472973486Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.473627974Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488429715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488493453Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488524985Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488575940Z" level=info msg="Start recovering state"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527785198Z" level=info msg="Start event monitor"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527839713Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527850864Z" level=info msg="Start streaming server"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527863213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527915176Z" level=info msg="runtime interface starting up..."
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527922561Z" level=info msg="starting plugins..."
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527953700Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.528090677Z" level=info msg="containerd successfully booted in 0.081452s"
	Dec 22 01:38:37 newest-cni-869293 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:48.194806   13654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:48.195880   13654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:48.197002   13654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:48.197545   13654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:48.199236   13654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:44:48 up 1 day,  8:27,  0 user,  load average: 1.46, 0.97, 1.40
	Linux newest-cni-869293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:44:44 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:44 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:44 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:45 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:45 newest-cni-869293 kubelet[13502]: E1222 01:44:45.741004   13502 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:45 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:45 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:46 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 22 01:44:46 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:46 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:46 newest-cni-869293 kubelet[13540]: E1222 01:44:46.573295   13540 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:46 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:46 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:47 newest-cni-869293 kubelet[13558]: E1222 01:44:47.369338   13558 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:47 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:48 newest-cni-869293 kubelet[13633]: E1222 01:44:48.092266   13633 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:48 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:48 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (385.079066ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-869293" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-869293
helpers_test.go:244: (dbg) docker inspect newest-cni-869293:

-- stdout --
	[
	    {
	        "Id": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	        "Created": "2025-12-22T01:28:35.561963158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1685878,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:38:31.964858425Z",
	            "FinishedAt": "2025-12-22T01:38:30.65991944Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hostname",
	        "HostsPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/hosts",
	        "LogPath": "/var/lib/docker/containers/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e/05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e-json.log",
	        "Name": "/newest-cni-869293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-869293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-869293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05e1fe12904ba3e2f69bb80f55f1eaac2e2f59b2b380c077c133943a7ff0a16e",
	                "LowerDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158fbaf3ed5d82f864f18e0e961f02de684f670ee24faf71f1ec33887e356946/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-869293",
	                "Source": "/var/lib/docker/volumes/newest-cni-869293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-869293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-869293",
	                "name.minikube.sigs.k8s.io": "newest-cni-869293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e62360fe6e0fa793fd3d0004ae901a019cba72f07e506d4e4de6097400773d18",
	            "SandboxKey": "/var/run/docker/netns/e62360fe6e0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38707"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38708"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38711"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38709"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38710"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-869293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:95:8a:54:97:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "237b6ac5b33ea8f647685859c16cf161283b5f3d52eea65816f2e7dfeb4ec191",
	                    "EndpointID": "5a4926332b20d8c327aefbaecbda7375782c9a567c1a86203a3a41986fbfb8d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-869293",
	                        "05e1fe12904b"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
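The inspect dump above is what the harness parses to decide the container itself is still up (`State.Status` is `running`) and which host ports map to the guest's 22/2376/8443 ports. The relevant fields can be pulled out of such a dump with a short script; this is a minimal sketch using a trimmed, hypothetical stand-in for the JSON above, not the full output:

```python
import json

# Trimmed stand-in for the `docker inspect` output above (illustrative sample,
# same shape but only the fields we read).
INSPECT = """
[
    {
        "Name": "/newest-cni-869293",
        "State": {"Status": "running", "Running": true, "Paused": false},
        "NetworkSettings": {
            "Ports": {
                "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "38707"}],
                "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38710"}]
            }
        }
    }
]
"""

def summarize(inspect_json: str) -> dict:
    """Return the container name, status, and a {containerPort: hostPort} map."""
    container = json.loads(inspect_json)[0]  # `docker inspect` emits a JSON array
    ports = {
        port: bindings[0]["HostPort"]
        for port, bindings in container["NetworkSettings"]["Ports"].items()
        if bindings
    }
    return {
        "name": container["Name"],
        "status": container["State"]["Status"],
        "ports": ports,
    }

summary = summarize(INSPECT)
print(summary["status"], summary["ports"]["8443/tcp"])  # running 38710
```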
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (318.29206ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
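The pattern above — run a CLI, capture stdout and the exit code, and treat certain non-zero codes as "may be ok" rather than fatal — is how the Go harness handles `minikube status`, which printed `Running` yet exited 2. A rough Python equivalent of that handling (the tolerated-code set here is an assumption for illustration, not taken from helpers_test.go):

```python
import subprocess

def run_dbg(cmd, ok_codes=(0, 2)):
    """Run cmd, returning (exit_code, stdout).

    Exit codes listed in ok_codes are treated as non-fatal, mirroring the
    harness's "exit status 2 (may be ok)" behaviour; anything else raises.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode not in ok_codes:
        raise RuntimeError(
            f"{cmd!r} failed: exit {proc.returncode}\n{proc.stderr}"
        )
    return proc.returncode, proc.stdout

# A shell one-liner stands in for the minikube binary: it prints a status
# word but exits 2, just like the command in the log above.
code, out = run_dbg(["sh", "-c", "echo Running; exit 2"])
print(code, out.strip())  # 2 Running
```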
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-869293 logs -n 25: (1.655228545s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:25 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p default-k8s-diff-port-778490                                                                                                                                                                                                                          │ default-k8s-diff-port-778490 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ delete  │ -p disable-driver-mounts-459348                                                                                                                                                                                                                          │ disable-driver-mounts-459348 │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │ 22 Dec 25 01:26 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ stop    │ -p embed-certs-980842 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:27 UTC │
	│ start   │ -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:27 UTC │ 22 Dec 25 01:28 UTC │
	│ image   │ embed-certs-980842 image list --format=json                                                                                                                                                                                                              │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ pause   │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ unpause │ -p embed-certs-980842 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ delete  │ -p embed-certs-980842                                                                                                                                                                                                                                    │ embed-certs-980842           │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │ 22 Dec 25 01:28 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-154186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:34 UTC │                     │
	│ stop    │ -p no-preload-154186 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p no-preload-154186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │ 22 Dec 25 01:36 UTC │
	│ start   │ -p no-preload-154186 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-154186            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-869293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:36 UTC │                     │
	│ stop    │ -p newest-cni-869293 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-869293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │ 22 Dec 25 01:38 UTC │
	│ start   │ -p newest-cni-869293 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:38 UTC │                     │
	│ image   │ newest-cni-869293 image list --format=json                                                                                                                                                                                                               │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:44 UTC │ 22 Dec 25 01:44 UTC │
	│ pause   │ -p newest-cni-869293 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:44 UTC │ 22 Dec 25 01:44 UTC │
	│ unpause │ -p newest-cni-869293 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-869293            │ jenkins │ v1.37.0 │ 22 Dec 25 01:44 UTC │ 22 Dec 25 01:44 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:38:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:38:31.686572 1685746 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:38:31.686782 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.686816 1685746 out.go:374] Setting ErrFile to fd 2...
	I1222 01:38:31.686836 1685746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:38:31.687133 1685746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:38:31.687563 1685746 out.go:368] Setting JSON to false
	I1222 01:38:31.688584 1685746 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":116465,"bootTime":1766251047,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:38:31.688686 1685746 start.go:143] virtualization:  
	I1222 01:38:31.691576 1685746 out.go:179] * [newest-cni-869293] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:38:31.695464 1685746 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:38:31.695552 1685746 notify.go:221] Checking for updates...
	I1222 01:38:31.701535 1685746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:38:31.704637 1685746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:31.707560 1685746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:38:31.710534 1685746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:38:31.713575 1685746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:38:31.717166 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:31.717762 1685746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:38:31.753414 1685746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:38:31.753539 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.812499 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.803096079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.812613 1685746 docker.go:319] overlay module found
	I1222 01:38:31.815770 1685746 out.go:179] * Using the docker driver based on existing profile
	I1222 01:38:31.818545 1685746 start.go:309] selected driver: docker
	I1222 01:38:31.818566 1685746 start.go:928] validating driver "docker" against &{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.818662 1685746 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:38:31.819384 1685746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:38:31.880587 1685746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:38:31.870819289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:38:31.880955 1685746 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 01:38:31.880984 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:31.881038 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:31.881081 1685746 start.go:353] cluster config:
	{Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:31.884279 1685746 out.go:179] * Starting "newest-cni-869293" primary control-plane node in "newest-cni-869293" cluster
	I1222 01:38:31.887056 1685746 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:38:31.890043 1685746 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:38:31.892868 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:31.892919 1685746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1222 01:38:31.892932 1685746 cache.go:65] Caching tarball of preloaded images
	I1222 01:38:31.892952 1685746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:38:31.893022 1685746 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:38:31.893039 1685746 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1222 01:38:31.893153 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:31.913018 1685746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:38:31.913041 1685746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:38:31.913060 1685746 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:38:31.913090 1685746 start.go:360] acquireMachinesLock for newest-cni-869293: {Name:mke3b6ee6da4cf7fd78c9e3f2e52f52ceafca24c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:38:31.913180 1685746 start.go:364] duration metric: took 44.275µs to acquireMachinesLock for "newest-cni-869293"
	I1222 01:38:31.913204 1685746 start.go:96] Skipping create...Using existing machine configuration
	I1222 01:38:31.913210 1685746 fix.go:54] fixHost starting: 
	I1222 01:38:31.913477 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:31.930780 1685746 fix.go:112] recreateIfNeeded on newest-cni-869293: state=Stopped err=<nil>
	W1222 01:38:31.930815 1685746 fix.go:138] unexpected machine state, will restart: <nil>
	W1222 01:38:29.750532 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:32.248109 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:31.934050 1685746 out.go:252] * Restarting existing docker container for "newest-cni-869293" ...
	I1222 01:38:31.934152 1685746 cli_runner.go:164] Run: docker start newest-cni-869293
	I1222 01:38:32.204881 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:32.243691 1685746 kic.go:430] container "newest-cni-869293" state is running.
	I1222 01:38:32.244096 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:32.265947 1685746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/config.json ...
	I1222 01:38:32.266210 1685746 machine.go:94] provisionDockerMachine start ...
	I1222 01:38:32.266268 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:32.293919 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:32.294281 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:32.294292 1685746 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:38:32.294932 1685746 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54476->127.0.0.1:38707: read: connection reset by peer
	I1222 01:38:35.433786 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.433813 1685746 ubuntu.go:182] provisioning hostname "newest-cni-869293"
	I1222 01:38:35.433886 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.451516 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.451830 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.451848 1685746 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-869293 && echo "newest-cni-869293" | sudo tee /etc/hostname
	I1222 01:38:35.591409 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-869293
	
	I1222 01:38:35.591519 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.609341 1685746 main.go:144] libmachine: Using SSH client type: native
	I1222 01:38:35.609647 1685746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38707 <nil> <nil>}
	I1222 01:38:35.609670 1685746 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-869293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-869293/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-869293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:38:35.742798 1685746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:38:35.742824 1685746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:38:35.742864 1685746 ubuntu.go:190] setting up certificates
	I1222 01:38:35.742881 1685746 provision.go:84] configureAuth start
	I1222 01:38:35.742942 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:35.763152 1685746 provision.go:143] copyHostCerts
	I1222 01:38:35.763214 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:38:35.763230 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:38:35.763306 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:38:35.763401 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:38:35.763407 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:38:35.763431 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:38:35.763483 1685746 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:38:35.763490 1685746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:38:35.763514 1685746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:38:35.763557 1685746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-869293 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-869293]
	I1222 01:38:35.889485 1685746 provision.go:177] copyRemoteCerts
	I1222 01:38:35.889557 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:38:35.889605 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:35.914143 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.016150 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 01:38:36.035930 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:38:36.054716 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 01:38:36.072586 1685746 provision.go:87] duration metric: took 329.680992ms to configureAuth
	I1222 01:38:36.072618 1685746 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:38:36.072830 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:36.072842 1685746 machine.go:97] duration metric: took 3.806623107s to provisionDockerMachine
	I1222 01:38:36.072850 1685746 start.go:293] postStartSetup for "newest-cni-869293" (driver="docker")
	I1222 01:38:36.072866 1685746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:38:36.072926 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:38:36.072980 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.090324 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.187013 1685746 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:38:36.191029 1685746 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:38:36.191111 1685746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:38:36.191134 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:38:36.191215 1685746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:38:36.191355 1685746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:38:36.191477 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:38:36.200008 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:36.219292 1685746 start.go:296] duration metric: took 146.420744ms for postStartSetup
	I1222 01:38:36.219381 1685746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:38:36.219430 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.237412 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.336664 1685746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:38:36.342619 1685746 fix.go:56] duration metric: took 4.429400761s for fixHost
	I1222 01:38:36.342646 1685746 start.go:83] releasing machines lock for "newest-cni-869293", held for 4.429452897s
	I1222 01:38:36.342750 1685746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-869293
	I1222 01:38:36.362211 1685746 ssh_runner.go:195] Run: cat /version.json
	I1222 01:38:36.362264 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.362344 1685746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:38:36.362407 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:36.385216 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.393122 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:36.571819 1685746 ssh_runner.go:195] Run: systemctl --version
	I1222 01:38:36.578591 1685746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:38:36.583121 1685746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:38:36.583193 1685746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:38:36.591539 1685746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 01:38:36.591564 1685746 start.go:496] detecting cgroup driver to use...
	I1222 01:38:36.591620 1685746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:38:36.591689 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:38:36.609980 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:38:36.623763 1685746 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:38:36.623883 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:38:36.639236 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:38:36.652937 1685746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:38:36.763224 1685746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:38:36.883204 1685746 docker.go:234] disabling docker service ...
	I1222 01:38:36.883275 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:38:36.898372 1685746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:38:36.911453 1685746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:38:37.034252 1685746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:38:37.157335 1685746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:38:37.170564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:38:37.185195 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:38:37.194710 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:38:37.204647 1685746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:38:37.204731 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:38:37.214808 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.223830 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:38:37.232600 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:38:37.242680 1685746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:38:37.254369 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:38:37.265094 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:38:37.278711 1685746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:38:37.288297 1685746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:38:37.299386 1685746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:38:37.306803 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.412668 1685746 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:38:37.531042 1685746 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:38:37.531187 1685746 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:38:37.535291 1685746 start.go:564] Will wait 60s for crictl version
	I1222 01:38:37.535398 1685746 ssh_runner.go:195] Run: which crictl
	I1222 01:38:37.539239 1685746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:38:37.568186 1685746 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:38:37.568329 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.589324 1685746 ssh_runner.go:195] Run: containerd --version
	I1222 01:38:37.614497 1685746 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
	I1222 01:38:37.617592 1685746 cli_runner.go:164] Run: docker network inspect newest-cni-869293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:38:37.633737 1685746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:38:37.637631 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.650774 1685746 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1222 01:38:34.249047 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:36.748953 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:37.653725 1685746 kubeadm.go:884] updating cluster {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:38:37.653882 1685746 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1222 01:38:37.653965 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.679481 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.679507 1685746 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:38:37.679567 1685746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:38:37.707944 1685746 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:38:37.707969 1685746 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:38:37.707979 1685746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1222 01:38:37.708083 1685746 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-869293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 01:38:37.708165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:38:37.740577 1685746 cni.go:84] Creating CNI manager for ""
	I1222 01:38:37.740600 1685746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 01:38:37.740621 1685746 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 01:38:37.740645 1685746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-869293 NodeName:newest-cni-869293 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:38:37.740759 1685746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-869293"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:38:37.740831 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 01:38:37.749395 1685746 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:38:37.749470 1685746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:38:37.757587 1685746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1222 01:38:37.770794 1685746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 01:38:37.784049 1685746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1222 01:38:37.797792 1685746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:38:37.801552 1685746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:38:37.811598 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:37.940636 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:37.962625 1685746 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293 for IP: 192.168.76.2
	I1222 01:38:37.962649 1685746 certs.go:195] generating shared ca certs ...
	I1222 01:38:37.962682 1685746 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:37.962837 1685746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:38:37.962900 1685746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:38:37.962912 1685746 certs.go:257] generating profile certs ...
	I1222 01:38:37.963014 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/client.key
	I1222 01:38:37.963084 1685746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key.70db33ce
	I1222 01:38:37.963128 1685746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key
	I1222 01:38:37.963238 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:38:37.963276 1685746 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:38:37.963287 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:38:37.963316 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:38:37.963343 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:38:37.963379 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:38:37.963434 1685746 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:38:37.964596 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:38:37.999913 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:38:38.025465 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:38:38.053443 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:38:38.087732 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 01:38:38.107200 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 01:38:38.125482 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:38:38.143284 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/newest-cni-869293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 01:38:38.161557 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:38:38.180124 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:38:38.198446 1685746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:38:38.215766 1685746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:38:38.228774 1685746 ssh_runner.go:195] Run: openssl version
	I1222 01:38:38.235631 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.244039 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:38:38.252123 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256169 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.256240 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:38:38.297738 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:38:38.305673 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.313250 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:38:38.321143 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325161 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.325259 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:38:38.366760 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:38:38.375589 1685746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.383142 1685746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:38:38.391262 1685746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395405 1685746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.395474 1685746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:38:38.436708 1685746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:38:38.444445 1685746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:38:38.448390 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 01:38:38.489618 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 01:38:38.530725 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 01:38:38.571636 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 01:38:38.612592 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 01:38:38.653872 1685746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
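The `openssl x509 -checkend 86400` calls above succeed only if the certificate will still be valid 86400 seconds (24 hours) from now; a cert expiring sooner makes openssl exit non-zero, which would trigger regeneration. A sketch of that check in Python (illustrative only; `cert_ok` is not a minikube function):

```python
from datetime import datetime, timedelta, timezone

def cert_ok(not_after, window_seconds=86400, now=None):
    # Equivalent of `openssl x509 -checkend N`: succeed only if the
    # certificate's notAfter is still in the future N seconds from now.
    if now is None:
        now = datetime.now(timezone.utc)
    return not_after > now + timedelta(seconds=window_seconds)
```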
	I1222 01:38:38.695135 1685746 kubeadm.go:401] StartCluster: {Name:newest-cni-869293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-869293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:38:38.695236 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:38:38.695304 1685746 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:38:38.730406 1685746 cri.go:96] found id: ""
	I1222 01:38:38.730480 1685746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:38:38.742929 1685746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 01:38:38.742952 1685746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 01:38:38.743012 1685746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 01:38:38.765617 1685746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 01:38:38.766245 1685746 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-869293" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.766510 1685746 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-1395000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-869293" cluster setting kubeconfig missing "newest-cni-869293" context setting]
	I1222 01:38:38.766957 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.768687 1685746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 01:38:38.776658 1685746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1222 01:38:38.776695 1685746 kubeadm.go:602] duration metric: took 33.737033ms to restartPrimaryControlPlane
	I1222 01:38:38.776705 1685746 kubeadm.go:403] duration metric: took 81.581475ms to StartCluster
	I1222 01:38:38.776720 1685746 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.776793 1685746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:38:38.777670 1685746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:38:38.777888 1685746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:38:38.778285 1685746 config.go:182] Loaded profile config "newest-cni-869293": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:38:38.778259 1685746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:38:38.778393 1685746 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-869293"
	I1222 01:38:38.778408 1685746 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-869293"
	I1222 01:38:38.778433 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.778917 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.779098 1685746 addons.go:70] Setting dashboard=true in profile "newest-cni-869293"
	I1222 01:38:38.779126 1685746 addons.go:239] Setting addon dashboard=true in "newest-cni-869293"
	W1222 01:38:38.779211 1685746 addons.go:248] addon dashboard should already be in state true
	I1222 01:38:38.779264 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.779355 1685746 addons.go:70] Setting default-storageclass=true in profile "newest-cni-869293"
	I1222 01:38:38.779382 1685746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-869293"
	I1222 01:38:38.779657 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.780717 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.783183 1685746 out.go:179] * Verifying Kubernetes components...
	I1222 01:38:38.795835 1685746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:38:38.839727 1685746 addons.go:239] Setting addon default-storageclass=true in "newest-cni-869293"
	I1222 01:38:38.839773 1685746 host.go:66] Checking if "newest-cni-869293" exists ...
	I1222 01:38:38.844706 1685746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 01:38:38.844788 1685746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:38:38.845056 1685746 cli_runner.go:164] Run: docker container inspect newest-cni-869293 --format={{.State.Status}}
	I1222 01:38:38.847706 1685746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:38.847732 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:38:38.847798 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.850623 1685746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 01:38:38.856243 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 01:38:38.856273 1685746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 01:38:38.856351 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.873943 1685746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:38.873976 1685746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:38:38.874046 1685746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-869293
	I1222 01:38:38.897069 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.917887 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:38.925239 1685746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38707 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/newest-cni-869293/id_rsa Username:docker}
	I1222 01:38:39.040289 1685746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:38:39.062591 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:39.071403 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 01:38:39.071429 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 01:38:39.085714 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 01:38:39.085742 1685746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 01:38:39.113564 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:39.117642 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 01:38:39.117668 1685746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 01:38:39.160317 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 01:38:39.160342 1685746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 01:38:39.179666 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 01:38:39.179693 1685746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 01:38:39.195940 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 01:38:39.195967 1685746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 01:38:39.211128 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 01:38:39.211152 1685746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 01:38:39.229341 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 01:38:39.229367 1685746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 01:38:39.242863 1685746 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.242891 1685746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 01:38:39.257396 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:39.740898 1685746 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:38:39.740996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:39.741091 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741148 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.741150 1685746 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:39.741362 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:39.924082 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.987453 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.012530 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.076254 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.106299 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.156991 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
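The repeated "connection refused" failures above are expected during a restart: kubectl is applying addon manifests before the restarted apiserver is listening on :8443, so minikube's retry.go keeps re-running each apply after a short delay until the apiserver comes back. A toy sketch of that retry shape (assumed structure, not the actual retry.go implementation, which uses backoff and timeouts):

```python
import time

def retry(fn, delays=(0.3, 0.6, 1.2)):
    # Re-run `fn` until it succeeds, sleeping between attempts -- in the
    # spirit of the "will retry after 300ms" lines in the log above.
    for delay in delays:
        try:
            return fn()
        except ConnectionRefusedError:
            time.sleep(delay)
    return fn()  # final attempt; let any error propagate to the caller
```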
	I1222 01:38:40.241110 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:40.291973 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.350617 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:40.361182 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.389531 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:40.437774 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:40.465333 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.692837 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 01:38:40.741460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:40.766384 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:40.961925 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:38:40.997418 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:41.047996 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:41.103696 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:41.241962 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:41.674831 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:39.248045 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:41.248244 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:41.741299 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:41.744404 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.118142 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:42.189177 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.241414 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:42.263947 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:42.333305 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:42.741698 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.241589 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:43.265699 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:43.338843 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.509282 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:38:43.559893 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:43.581660 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.623026 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:43.741112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.241130 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.741229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:44.931703 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:45.008485 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.244431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.741178 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:45.765524 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:45.843868 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:45.977122 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:46.040374 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:46.241453 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:46.486248 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:46.559168 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:38:43.248311 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:45.249134 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:47.748896 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:46.741869 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.241095 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:47.741431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.241112 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.294921 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:48.361284 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:48.741773 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:48.852570 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:48.911873 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.241377 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:49.368148 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:49.429800 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:49.741220 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.241219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:50.741547 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:51.241159 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:49.748932 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:52.248838 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:51.741774 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.241901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:52.391494 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:38:52.452597 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.452636 1685746 retry.go:84] will retry after 11.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.508552 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:52.579056 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:52.741603 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.241037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:53.297681 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:38:53.358617 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:53.741128 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.241259 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:54.741444 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.241131 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:55.741185 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.241903 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:54.748014 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:38:56.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:38:56.742022 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:56.871217 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:38:56.931377 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:56.931421 1685746 retry.go:84] will retry after 12.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:38:57.241904 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:57.741132 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.241082 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:58.741129 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.241514 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:38:59.741571 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.241104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:00.342627 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:00.433191 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.433235 1685746 retry.go:84] will retry after 8.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:00.741833 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:01.241455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:38:59.248212 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:01.248492 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:01.741502 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.241599 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:02.741070 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.241152 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:03.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.041996 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:04.111760 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.111812 1685746 retry.go:84] will retry after 10s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:04.242089 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:04.741350 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.241736 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:05.741098 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:06.241279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:03.747982 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:05.748583 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:07.748998 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:06.742311 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.241927 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:07.741133 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.241157 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:08.532510 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:08.603273 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.603314 1685746 retry.go:84] will retry after 7.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:08.741625 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.241616 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.741180 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:09.845450 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:09.907468 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:10.242040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:10.742004 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:11.242043 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:10.248934 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:12.748076 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:11.741028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.241114 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:12.741779 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.241398 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:13.741757 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.084932 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:14.149870 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.149915 1685746 retry.go:84] will retry after 13.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:14.241288 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:14.742009 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.241500 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:15.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.241659 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:16.395227 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:16.456949 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:14.748959 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:17.248674 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:16.741507 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.241459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:17.741042 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.241111 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:18.741162 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.241875 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:19.741715 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.241732 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:20.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:21.241347 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:19.748622 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:21.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:21.741639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.241911 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:22.742051 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.241970 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:23.741127 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.241560 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:24.741692 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.241106 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:25.741122 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:26.241137 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:24.248544 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:26.747990 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:26.741585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.241155 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:27.301256 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:27.375517 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.375598 1685746 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:27.741076 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.241034 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:28.741642 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:29.226555 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:39:29.242011 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:29.291422 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:29.741622 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.245888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:30.741105 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:31.241550 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:28.748186 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:30.748280 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:32.748600 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:31.741066 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.241183 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:32.741695 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.241134 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:33.741807 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.241685 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:34.741125 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.241915 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:35.741241 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:36.241639 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1222 01:39:35.249008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:37.748582 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:36.741652 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.241141 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:37.741891 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.054310 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:39:38.118505 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.118547 1685746 retry.go:84] will retry after 47.2s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:39:38.241764 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:38.741459 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:39.241609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:39.241696 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:39.269891 1685746 cri.go:96] found id: ""
	I1222 01:39:39.269914 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.269923 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:39.269930 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:39.269991 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:39.300389 1685746 cri.go:96] found id: ""
	I1222 01:39:39.300414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.300423 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:39.300430 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:39.300501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:39.326557 1685746 cri.go:96] found id: ""
	I1222 01:39:39.326582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.326592 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:39.326598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:39.326697 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:39.354049 1685746 cri.go:96] found id: ""
	I1222 01:39:39.354115 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.354125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:39.354132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:39.354202 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:39.380457 1685746 cri.go:96] found id: ""
	I1222 01:39:39.380490 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.380500 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:39.380507 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:39.380577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:39.407039 1685746 cri.go:96] found id: ""
	I1222 01:39:39.407062 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.407070 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:39.407076 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:39.407139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:39.431541 1685746 cri.go:96] found id: ""
	I1222 01:39:39.431568 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.431577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:39.431584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:39.431676 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:39.457555 1685746 cri.go:96] found id: ""
	I1222 01:39:39.457588 1685746 logs.go:282] 0 containers: []
	W1222 01:39:39.457607 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:39.457616 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:39.457629 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:39.517907 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:39.517997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:39.534348 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:39.534373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:39.607407 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:39.598204    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.599085    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.600659    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.601166    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:39.602826    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:39.607438 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:39.607463 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:39.634050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:39.634094 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:40.248054 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:42.748083 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:42.163786 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:42.176868 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:42.176959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:42.208642 1685746 cri.go:96] found id: ""
	I1222 01:39:42.208672 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.208682 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:42.208688 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:42.208757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:42.249523 1685746 cri.go:96] found id: ""
	I1222 01:39:42.249552 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.249562 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:42.249569 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:42.249641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:42.283515 1685746 cri.go:96] found id: ""
	I1222 01:39:42.283542 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.283550 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:42.283557 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:42.283659 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:42.312237 1685746 cri.go:96] found id: ""
	I1222 01:39:42.312260 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.312269 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:42.312276 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:42.312335 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:42.341269 1685746 cri.go:96] found id: ""
	I1222 01:39:42.341297 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.341306 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:42.341312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:42.341374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:42.367696 1685746 cri.go:96] found id: ""
	I1222 01:39:42.367723 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.367732 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:42.367739 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:42.367804 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:42.396577 1685746 cri.go:96] found id: ""
	I1222 01:39:42.396602 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.396612 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:42.396618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:42.396689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:42.426348 1685746 cri.go:96] found id: ""
	I1222 01:39:42.426380 1685746 logs.go:282] 0 containers: []
	W1222 01:39:42.426392 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:42.426413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:42.426433 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:42.481969 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:42.482005 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:42.499357 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:42.499436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:42.576627 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:42.568338    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.569235    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.570839    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.571221    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:42.572936    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:42.576649 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:42.576663 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:42.601751 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:42.601784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.131239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:45.157288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:45.157379 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:45.207917 1685746 cri.go:96] found id: ""
	I1222 01:39:45.207953 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.207963 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:45.207975 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:45.208042 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:45.255413 1685746 cri.go:96] found id: ""
	I1222 01:39:45.255448 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.255459 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:45.255467 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:45.255564 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:45.300163 1685746 cri.go:96] found id: ""
	I1222 01:39:45.300196 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.300206 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:45.300214 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:45.300285 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:45.348918 1685746 cri.go:96] found id: ""
	I1222 01:39:45.348943 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.348952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:45.348959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:45.349022 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:45.379477 1685746 cri.go:96] found id: ""
	I1222 01:39:45.379502 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.379512 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:45.379518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:45.379580 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:45.410514 1685746 cri.go:96] found id: ""
	I1222 01:39:45.410535 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.410543 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:45.410550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:45.410611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:45.436661 1685746 cri.go:96] found id: ""
	I1222 01:39:45.436686 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.436695 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:45.436702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:45.436769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:45.466972 1685746 cri.go:96] found id: ""
	I1222 01:39:45.467001 1685746 logs.go:282] 0 containers: []
	W1222 01:39:45.467010 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:45.467019 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:45.467032 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:45.567688 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:45.558837    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.559539    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561202    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.561740    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:45.563191    2069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:45.567712 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:45.567731 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:45.593712 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:45.593757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:45.626150 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:45.626179 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:45.681273 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:45.681310 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:39:44.748908 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:47.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:48.196684 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:48.207640 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:48.207718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:48.232650 1685746 cri.go:96] found id: ""
	I1222 01:39:48.232680 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.232688 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:48.232708 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:48.232772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:48.264801 1685746 cri.go:96] found id: ""
	I1222 01:39:48.264831 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.264841 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:48.264848 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:48.264915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:48.300270 1685746 cri.go:96] found id: ""
	I1222 01:39:48.300300 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.300310 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:48.300317 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:48.300388 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:48.334711 1685746 cri.go:96] found id: ""
	I1222 01:39:48.334782 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.334806 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:48.334821 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:48.334898 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:48.359955 1685746 cri.go:96] found id: ""
	I1222 01:39:48.360023 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.360038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:48.360052 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:48.360124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:48.386551 1685746 cri.go:96] found id: ""
	I1222 01:39:48.386574 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.386583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:48.386589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:48.386648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:48.412026 1685746 cri.go:96] found id: ""
	I1222 01:39:48.412052 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.412062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:48.412069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:48.412129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:48.440847 1685746 cri.go:96] found id: ""
	I1222 01:39:48.440870 1685746 logs.go:282] 0 containers: []
	W1222 01:39:48.440878 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:48.440887 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:48.440897 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:48.496591 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:48.496673 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:48.512755 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:48.512834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:48.596174 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:48.587854    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.588773    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590542    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.590879    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:48.592406    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:48.596249 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:48.596281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:48.621362 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:48.621397 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:39:51.155431 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:51.169542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:51.169616 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:51.195476 1685746 cri.go:96] found id: ""
	I1222 01:39:51.195500 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.195509 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:51.195516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:51.195585 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:51.220215 1685746 cri.go:96] found id: ""
	I1222 01:39:51.220240 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.220249 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:51.220255 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:51.220324 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:51.248478 1685746 cri.go:96] found id: ""
	I1222 01:39:51.248508 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.248527 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:51.248534 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:51.248594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:51.282587 1685746 cri.go:96] found id: ""
	I1222 01:39:51.282615 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.282624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:51.282630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:51.282691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:51.310999 1685746 cri.go:96] found id: ""
	I1222 01:39:51.311029 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.311038 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:51.311044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:51.311105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:51.338337 1685746 cri.go:96] found id: ""
	I1222 01:39:51.338414 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.338431 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:51.338438 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:51.338517 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:51.365554 1685746 cri.go:96] found id: ""
	I1222 01:39:51.365582 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.365591 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:51.365598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:51.365656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:51.389874 1685746 cri.go:96] found id: ""
	I1222 01:39:51.389903 1685746 logs.go:282] 0 containers: []
	W1222 01:39:51.389913 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:51.389922 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:51.389933 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:51.449732 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:51.449797 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:51.467573 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:51.467669 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:51.568437 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:51.561021    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.561697    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563323    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.563918    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:51.564974    2299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:51.568512 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:51.568561 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:51.595758 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:51.595841 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:49.249046 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:51.748032 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:53.905270 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 01:39:53.968241 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:39:53.968406 1685746 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:39:54.129563 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:54.143910 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:54.144012 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:54.169973 1685746 cri.go:96] found id: ""
	I1222 01:39:54.170009 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.170018 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:54.170042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:54.170158 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:54.198811 1685746 cri.go:96] found id: ""
	I1222 01:39:54.198838 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.198847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:54.198854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:54.198917 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:54.224425 1685746 cri.go:96] found id: ""
	I1222 01:39:54.224452 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.224462 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:54.224468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:54.224549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:54.273957 1685746 cri.go:96] found id: ""
	I1222 01:39:54.273983 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.273992 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:54.273998 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:54.274059 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:54.306801 1685746 cri.go:96] found id: ""
	I1222 01:39:54.306826 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.306836 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:54.306842 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:54.306916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:54.339513 1685746 cri.go:96] found id: ""
	I1222 01:39:54.339539 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.339548 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:54.339555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:54.339617 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:54.365259 1685746 cri.go:96] found id: ""
	I1222 01:39:54.365285 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.365295 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:54.365301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:54.365363 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:54.390271 1685746 cri.go:96] found id: ""
	I1222 01:39:54.390294 1685746 logs.go:282] 0 containers: []
	W1222 01:39:54.390303 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:54.390312 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:54.390324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:54.445696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:54.445728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:54.460676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:54.460751 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:54.537038 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:54.528481    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.529359    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531148    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.531443    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:54.533489    2418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:54.537060 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:54.537075 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:54.566201 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:54.566234 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:39:53.749035 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:39:56.248725 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:39:57.093953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:39:57.104681 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:39:57.104755 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:39:57.132428 1685746 cri.go:96] found id: ""
	I1222 01:39:57.132455 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.132465 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:39:57.132472 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:39:57.132532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:39:57.158487 1685746 cri.go:96] found id: ""
	I1222 01:39:57.158512 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.158521 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:39:57.158528 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:39:57.158589 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:39:57.184175 1685746 cri.go:96] found id: ""
	I1222 01:39:57.184203 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.184213 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:39:57.184219 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:39:57.184279 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:39:57.215724 1685746 cri.go:96] found id: ""
	I1222 01:39:57.215752 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.215761 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:39:57.215768 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:39:57.215830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:39:57.252375 1685746 cri.go:96] found id: ""
	I1222 01:39:57.252408 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.252420 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:39:57.252427 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:39:57.252499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:39:57.291286 1685746 cri.go:96] found id: ""
	I1222 01:39:57.291323 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.291333 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:39:57.291344 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:39:57.291408 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:39:57.322496 1685746 cri.go:96] found id: ""
	I1222 01:39:57.322577 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.322594 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:39:57.322602 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:39:57.322678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:39:57.352695 1685746 cri.go:96] found id: ""
	I1222 01:39:57.352722 1685746 logs.go:282] 0 containers: []
	W1222 01:39:57.352731 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:39:57.352741 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:39:57.352754 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:39:57.410232 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:39:57.410271 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:39:57.425451 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:39:57.425481 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:39:57.498123 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:39:57.489260    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.490135    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.491903    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.492235    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:39:57.493977    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:39:57.498197 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:39:57.498226 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:39:57.530586 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:39:57.530677 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:00.062361 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:00.152699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:00.152784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:00.243584 1685746 cri.go:96] found id: ""
	I1222 01:40:00.243618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.243635 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:00.243645 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:00.243728 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:00.323644 1685746 cri.go:96] found id: ""
	I1222 01:40:00.323704 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.323720 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:00.323730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:00.323805 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:00.411473 1685746 cri.go:96] found id: ""
	I1222 01:40:00.411502 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.411521 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:00.411532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:00.411621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:00.511894 1685746 cri.go:96] found id: ""
	I1222 01:40:00.511922 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.511933 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:00.511941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:00.512015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:00.575706 1685746 cri.go:96] found id: ""
	I1222 01:40:00.575736 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.575746 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:00.575753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:00.575828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:00.666886 1685746 cri.go:96] found id: ""
	I1222 01:40:00.666913 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.666922 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:00.666929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:00.667011 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:00.704456 1685746 cri.go:96] found id: ""
	I1222 01:40:00.704490 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.704499 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:00.704513 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:00.704583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:00.763369 1685746 cri.go:96] found id: ""
	I1222 01:40:00.763404 1685746 logs.go:282] 0 containers: []
	W1222 01:40:00.763415 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:00.763425 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:00.763439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:00.822507 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:00.822546 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:00.839492 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:00.839529 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:00.911350 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:00.902540    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.903387    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905036    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.905557    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:00.907072    2639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:00.911374 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:00.911389 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:00.937901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:00.937953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:01.674108 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:39:58.748290 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:00.756406 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:01.748211 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:01.748257 1685746 retry.go:84] will retry after 28.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 01:40:03.469297 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:03.480071 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:03.480145 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:03.519512 1685746 cri.go:96] found id: ""
	I1222 01:40:03.519627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.519661 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:03.519709 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:03.520078 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:03.555737 1685746 cri.go:96] found id: ""
	I1222 01:40:03.555763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.555806 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:03.555819 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:03.555909 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:03.580955 1685746 cri.go:96] found id: ""
	I1222 01:40:03.580986 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.580995 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:03.581004 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:03.581068 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:03.610855 1685746 cri.go:96] found id: ""
	I1222 01:40:03.610935 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.610952 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:03.610961 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:03.611037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:03.635994 1685746 cri.go:96] found id: ""
	I1222 01:40:03.636019 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.636027 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:03.636033 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:03.636103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:03.661008 1685746 cri.go:96] found id: ""
	I1222 01:40:03.661086 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.661109 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:03.661132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:03.661249 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:03.685551 1685746 cri.go:96] found id: ""
	I1222 01:40:03.685577 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.685586 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:03.685594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:03.685653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:03.710025 1685746 cri.go:96] found id: ""
	I1222 01:40:03.710054 1685746 logs.go:282] 0 containers: []
	W1222 01:40:03.710063 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:03.710073 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:03.710109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:03.748992 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:03.749066 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:03.812952 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:03.812990 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:03.828176 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:03.828207 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:03.895557 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:03.885860    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.886437    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.889658    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.890260    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:03.891906    2768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:03.895583 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:03.895596 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:06.421124 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:06.432321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:06.432435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:06.458845 1685746 cri.go:96] found id: ""
	I1222 01:40:06.458926 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.458944 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:06.458951 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:06.459024 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:06.483853 1685746 cri.go:96] found id: ""
	I1222 01:40:06.483881 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.483890 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:06.483897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:06.483956 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:06.518710 1685746 cri.go:96] found id: ""
	I1222 01:40:06.518741 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.518750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:06.518757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:06.518821 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:06.549152 1685746 cri.go:96] found id: ""
	I1222 01:40:06.549183 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.549191 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:06.549198 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:06.549256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:06.579003 1685746 cri.go:96] found id: ""
	I1222 01:40:06.579032 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.579041 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:06.579048 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:06.579110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:06.614999 1685746 cri.go:96] found id: ""
	I1222 01:40:06.615029 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.615038 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:06.615045 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:06.615109 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:06.644049 1685746 cri.go:96] found id: ""
	I1222 01:40:06.644073 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.644082 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:06.644088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:06.644150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:06.670551 1685746 cri.go:96] found id: ""
	I1222 01:40:06.670580 1685746 logs.go:282] 0 containers: []
	W1222 01:40:06.670590 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:06.670599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:06.670630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:03.248649 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:05.249130 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:07.749016 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:06.696127 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:06.696164 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:06.728583 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:06.728612 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:06.788068 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:06.788103 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:06.805676 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:06.805708 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:06.875097 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:06.866896    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.867468    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869095    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.869679    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:06.871268    2880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.375863 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:09.386805 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:09.386883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:09.413272 1685746 cri.go:96] found id: ""
	I1222 01:40:09.413299 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.413307 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:09.413313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:09.413374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:09.438591 1685746 cri.go:96] found id: ""
	I1222 01:40:09.438615 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.438623 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:09.438630 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:09.438692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:09.463919 1685746 cri.go:96] found id: ""
	I1222 01:40:09.463943 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.463952 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:09.463959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:09.464026 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:09.493604 1685746 cri.go:96] found id: ""
	I1222 01:40:09.493627 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.493641 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:09.493648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:09.493707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:09.529370 1685746 cri.go:96] found id: ""
	I1222 01:40:09.529394 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.529404 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:09.529411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:09.529477 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:09.562121 1685746 cri.go:96] found id: ""
	I1222 01:40:09.562150 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.562160 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:09.562167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:09.562233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:09.587896 1685746 cri.go:96] found id: ""
	I1222 01:40:09.587924 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.587935 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:09.587942 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:09.588010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:09.613576 1685746 cri.go:96] found id: ""
	I1222 01:40:09.613600 1685746 logs.go:282] 0 containers: []
	W1222 01:40:09.613609 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:09.613619 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:09.613630 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:09.671590 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:09.671627 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:09.688438 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:09.688468 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:09.770484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:09.761524    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.762731    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764535    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.764916    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:09.766502    2979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:09.770797 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:09.770834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:09.803134 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:09.803237 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:10.247989 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:12.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:12.334803 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:12.345660 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:12.345780 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:12.375026 1685746 cri.go:96] found id: ""
	I1222 01:40:12.375056 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.375067 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:12.375075 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:12.375154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:12.400255 1685746 cri.go:96] found id: ""
	I1222 01:40:12.400282 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.400291 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:12.400299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:12.400402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:12.425430 1685746 cri.go:96] found id: ""
	I1222 01:40:12.425458 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.425467 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:12.425474 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:12.425535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:12.450734 1685746 cri.go:96] found id: ""
	I1222 01:40:12.450816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.450832 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:12.450841 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:12.450918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:12.477690 1685746 cri.go:96] found id: ""
	I1222 01:40:12.477719 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.477735 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:12.477742 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:12.477803 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:12.517751 1685746 cri.go:96] found id: ""
	I1222 01:40:12.517779 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.517787 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:12.517794 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:12.517858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:12.544749 1685746 cri.go:96] found id: ""
	I1222 01:40:12.544777 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.544786 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:12.544793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:12.544858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:12.576758 1685746 cri.go:96] found id: ""
	I1222 01:40:12.576786 1685746 logs.go:282] 0 containers: []
	W1222 01:40:12.576795 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:12.576805 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:12.576816 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:12.592450 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:12.592478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:12.658073 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:12.649657    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.650391    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652114    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.652710    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:12.654319    3090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:12.658125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:12.658138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:12.683599 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:12.683637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:12.715675 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:12.715707 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:15.275108 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:15.285651 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:15.285724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:15.311249 1685746 cri.go:96] found id: ""
	I1222 01:40:15.311277 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.311287 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:15.311293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:15.311353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:15.336192 1685746 cri.go:96] found id: ""
	I1222 01:40:15.336218 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.336226 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:15.336234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:15.336297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:15.362231 1685746 cri.go:96] found id: ""
	I1222 01:40:15.362254 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.362263 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:15.362269 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:15.362331 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:15.390149 1685746 cri.go:96] found id: ""
	I1222 01:40:15.390176 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.390185 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:15.390192 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:15.390259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:15.417421 1685746 cri.go:96] found id: ""
	I1222 01:40:15.417446 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.417456 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:15.417464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:15.417530 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:15.444318 1685746 cri.go:96] found id: ""
	I1222 01:40:15.444346 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.444356 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:15.444368 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:15.444428 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:15.469475 1685746 cri.go:96] found id: ""
	I1222 01:40:15.469503 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.469512 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:15.469520 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:15.469581 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:15.501561 1685746 cri.go:96] found id: ""
	I1222 01:40:15.501588 1685746 logs.go:282] 0 containers: []
	W1222 01:40:15.501597 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:15.501606 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:15.501637 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:15.518032 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:15.518062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:15.588024 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:15.580432    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.581037    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.582843    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.583305    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:15.584373    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:15.588049 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:15.588062 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:15.613914 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:15.613953 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:15.645712 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:15.645739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:40:14.747949 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:16.749012 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:18.200926 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:18.211578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:18.211651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:18.237396 1685746 cri.go:96] found id: ""
	I1222 01:40:18.237421 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.237429 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:18.237436 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:18.237503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:18.264313 1685746 cri.go:96] found id: ""
	I1222 01:40:18.264345 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.264356 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:18.264369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:18.264451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:18.290240 1685746 cri.go:96] found id: ""
	I1222 01:40:18.290265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.290274 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:18.290281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:18.290340 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:18.315874 1685746 cri.go:96] found id: ""
	I1222 01:40:18.315898 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.315907 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:18.315914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:18.315975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:18.340813 1685746 cri.go:96] found id: ""
	I1222 01:40:18.340836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.340844 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:18.340852 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:18.340912 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:18.368094 1685746 cri.go:96] found id: ""
	I1222 01:40:18.368119 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.368128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:18.368135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:18.368251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:18.393525 1685746 cri.go:96] found id: ""
	I1222 01:40:18.393551 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.393559 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:18.393566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:18.393629 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:18.419984 1685746 cri.go:96] found id: ""
	I1222 01:40:18.420011 1685746 logs.go:282] 0 containers: []
	W1222 01:40:18.420020 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:18.420031 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:18.420043 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:18.435061 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:18.435090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:18.511216 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:18.502272    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.503063    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.504790    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.505424    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:18.507123    3309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:18.511242 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:18.511258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:18.539215 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:18.539253 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:18.571721 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:18.571752 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.133335 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:21.144470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:21.144552 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:21.170402 1685746 cri.go:96] found id: ""
	I1222 01:40:21.170435 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.170444 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:21.170451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:21.170514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:21.197647 1685746 cri.go:96] found id: ""
	I1222 01:40:21.197674 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.197683 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:21.197690 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:21.197754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:21.231085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.231120 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.231130 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:21.231137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:21.231243 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:21.268085 1685746 cri.go:96] found id: ""
	I1222 01:40:21.268112 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.268121 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:21.268129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:21.268195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:21.293752 1685746 cri.go:96] found id: ""
	I1222 01:40:21.293781 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.293791 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:21.293797 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:21.293864 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:21.320171 1685746 cri.go:96] found id: ""
	I1222 01:40:21.320195 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.320203 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:21.320210 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:21.320273 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:21.346069 1685746 cri.go:96] found id: ""
	I1222 01:40:21.346162 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.346177 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:21.346185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:21.346246 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:21.371416 1685746 cri.go:96] found id: ""
	I1222 01:40:21.371443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:21.371452 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:21.371462 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:21.371475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:21.404674 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:21.404703 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:21.460348 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:21.460388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:21.475958 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:21.475994 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:21.561495 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:21.552344    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.553051    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555518    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.555905    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:21.557453    3437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:21.561520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:21.561533 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1222 01:40:19.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:21.248090 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:24.089244 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:24.100814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:24.100889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:24.126847 1685746 cri.go:96] found id: ""
	I1222 01:40:24.126878 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.126888 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:24.126895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:24.126959 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:24.152740 1685746 cri.go:96] found id: ""
	I1222 01:40:24.152768 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.152778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:24.152784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:24.152845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:24.178506 1685746 cri.go:96] found id: ""
	I1222 01:40:24.178532 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.178540 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:24.178547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:24.178628 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:24.210111 1685746 cri.go:96] found id: ""
	I1222 01:40:24.210138 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.210147 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:24.210156 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:24.210219 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:24.234336 1685746 cri.go:96] found id: ""
	I1222 01:40:24.234358 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.234372 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:24.234379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:24.234440 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:24.259792 1685746 cri.go:96] found id: ""
	I1222 01:40:24.259861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.259884 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:24.259898 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:24.259973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:24.285594 1685746 cri.go:96] found id: ""
	I1222 01:40:24.285623 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.285632 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:24.285639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:24.285722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:24.312027 1685746 cri.go:96] found id: ""
	I1222 01:40:24.312055 1685746 logs.go:282] 0 containers: []
	W1222 01:40:24.312064 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:24.312074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:24.312088 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:24.345845 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:24.345873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:24.404101 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:24.404140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:24.419436 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:24.419465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:24.485147 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:24.477123    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.477548    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479053    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.479387    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:24.480817    3551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:24.485182 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:24.485195 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:25.275578 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 01:40:25.338578 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:25.338685 1685746 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 01:40:23.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:25.748112 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:27.748979 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:27.016338 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:27.030615 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:27.030685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:27.060751 1685746 cri.go:96] found id: ""
	I1222 01:40:27.060775 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.060784 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:27.060791 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:27.060850 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:27.088784 1685746 cri.go:96] found id: ""
	I1222 01:40:27.088807 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.088816 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:27.088822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:27.088889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:27.115559 1685746 cri.go:96] found id: ""
	I1222 01:40:27.115581 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.115590 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:27.115596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:27.115658 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:27.141509 1685746 cri.go:96] found id: ""
	I1222 01:40:27.141579 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.141602 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:27.141624 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:27.141712 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:27.168944 1685746 cri.go:96] found id: ""
	I1222 01:40:27.168984 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.168993 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:27.169006 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:27.169076 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:27.194554 1685746 cri.go:96] found id: ""
	I1222 01:40:27.194584 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.194593 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:27.194599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:27.194662 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:27.219603 1685746 cri.go:96] found id: ""
	I1222 01:40:27.219684 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.219707 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:27.219721 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:27.219801 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:27.246999 1685746 cri.go:96] found id: ""
	I1222 01:40:27.247033 1685746 logs.go:282] 0 containers: []
	W1222 01:40:27.247042 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:27.247067 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:27.247087 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:27.302977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:27.303012 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:27.318364 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:27.318398 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:27.385339 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:27.376631    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.377224    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.379740    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.380579    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:27.381699    3659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:27.385413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:27.385442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:27.411346 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:27.411384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:29.941731 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:29.955808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:29.955883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:29.982684 1685746 cri.go:96] found id: ""
	I1222 01:40:29.982709 1685746 logs.go:282] 0 containers: []
	W1222 01:40:29.982718 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:29.982725 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:29.982796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:30.036793 1685746 cri.go:96] found id: ""
	I1222 01:40:30.036836 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.036847 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:30.036858 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:30.036986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:30.127706 1685746 cri.go:96] found id: ""
	I1222 01:40:30.127740 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.127750 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:30.127757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:30.127828 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:30.158476 1685746 cri.go:96] found id: ""
	I1222 01:40:30.158509 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.158521 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:30.158529 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:30.158598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:30.187425 1685746 cri.go:96] found id: ""
	I1222 01:40:30.187453 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.187463 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:30.187470 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:30.187539 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:30.216013 1685746 cri.go:96] found id: ""
	I1222 01:40:30.216043 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.216052 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:30.216060 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:30.216125 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:30.241947 1685746 cri.go:96] found id: ""
	I1222 01:40:30.241975 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.241985 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:30.241991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:30.242074 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:30.271569 1685746 cri.go:96] found id: ""
	I1222 01:40:30.271595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:30.271603 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:30.271613 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:30.271625 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:30.327858 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:30.327896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:30.343479 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:30.343505 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:30.411657 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:30.402221    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.402895    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404480    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.404791    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:30.407032    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:30.411678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:30.411692 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:30.436851 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:30.436886 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:30.511390 1685746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 01:40:30.582457 1685746 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 01:40:30.582560 1685746 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 01:40:30.587532 1685746 out.go:179] * Enabled addons: 
	I1222 01:40:30.590426 1685746 addons.go:530] duration metric: took 1m51.812167431s for enable addons: enabled=[]
	W1222 01:40:30.247997 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:32.248097 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:32.969406 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:32.980360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:32.980444 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:33.016753 1685746 cri.go:96] found id: ""
	I1222 01:40:33.016778 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.016787 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:33.016795 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:33.016881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:33.053288 1685746 cri.go:96] found id: ""
	I1222 01:40:33.053315 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.053334 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:33.053358 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:33.053457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:33.087392 1685746 cri.go:96] found id: ""
	I1222 01:40:33.087417 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.087426 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:33.087432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:33.087492 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:33.113564 1685746 cri.go:96] found id: ""
	I1222 01:40:33.113595 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.113604 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:33.113611 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:33.113698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:33.143733 1685746 cri.go:96] found id: ""
	I1222 01:40:33.143757 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.143766 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:33.143772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:33.143835 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:33.169776 1685746 cri.go:96] found id: ""
	I1222 01:40:33.169808 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.169816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:33.169824 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:33.169887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:33.198413 1685746 cri.go:96] found id: ""
	I1222 01:40:33.198438 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.198446 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:33.198453 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:33.198514 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:33.223746 1685746 cri.go:96] found id: ""
	I1222 01:40:33.223816 1685746 logs.go:282] 0 containers: []
	W1222 01:40:33.223838 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:33.223855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:33.223866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:33.249217 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:33.249247 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:33.282243 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:33.282269 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:33.340677 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:33.340714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:33.355635 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:33.355667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:33.438690 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:33.431030    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.431616    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433134    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.433634    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:33.435107    3900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:35.940454 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:35.954241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:35.954312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:35.979549 1685746 cri.go:96] found id: ""
	I1222 01:40:35.979576 1685746 logs.go:282] 0 containers: []
	W1222 01:40:35.979585 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:35.979592 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:35.979654 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:36.010177 1685746 cri.go:96] found id: ""
	I1222 01:40:36.010207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.010217 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:36.010224 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:36.010295 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:36.045048 1685746 cri.go:96] found id: ""
	I1222 01:40:36.045078 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.045088 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:36.045095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:36.045157 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:36.074866 1685746 cri.go:96] found id: ""
	I1222 01:40:36.074889 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.074897 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:36.074903 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:36.074965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:36.101425 1685746 cri.go:96] found id: ""
	I1222 01:40:36.101499 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.101511 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:36.101518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:36.106750 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:36.134167 1685746 cri.go:96] found id: ""
	I1222 01:40:36.134205 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.134215 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:36.134223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:36.134288 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:36.159767 1685746 cri.go:96] found id: ""
	I1222 01:40:36.159792 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.159802 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:36.159809 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:36.159873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:36.188878 1685746 cri.go:96] found id: ""
	I1222 01:40:36.188907 1685746 logs.go:282] 0 containers: []
	W1222 01:40:36.188917 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:36.188928 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:36.188941 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:36.253797 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:36.245184    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.246059    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.247790    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.248134    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:36.249657    3993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:36.253877 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:36.253906 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:36.279371 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:36.279408 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:36.308866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:36.308901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:36.365568 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:36.365603 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:34.248867 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:36.748755 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:38.881766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:38.892862 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:38.892944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:38.919366 1685746 cri.go:96] found id: ""
	I1222 01:40:38.919399 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.919409 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:38.919421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:38.919495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:38.953015 1685746 cri.go:96] found id: ""
	I1222 01:40:38.953042 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.953051 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:38.953058 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:38.953121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:38.979133 1685746 cri.go:96] found id: ""
	I1222 01:40:38.979158 1685746 logs.go:282] 0 containers: []
	W1222 01:40:38.979167 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:38.979173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:38.979236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:39.017688 1685746 cri.go:96] found id: ""
	I1222 01:40:39.017714 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.017724 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:39.017735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:39.017797 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:39.056591 1685746 cri.go:96] found id: ""
	I1222 01:40:39.056614 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.056622 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:39.056629 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:39.056686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:39.085085 1685746 cri.go:96] found id: ""
	I1222 01:40:39.085155 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.085177 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:39.085199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:39.085296 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:39.114614 1685746 cri.go:96] found id: ""
	I1222 01:40:39.114640 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.114649 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:39.114656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:39.114738 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:39.140466 1685746 cri.go:96] found id: ""
	I1222 01:40:39.140511 1685746 logs.go:282] 0 containers: []
	W1222 01:40:39.140520 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:39.140545 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:39.140564 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:39.208956 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:39.200655    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.201282    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.202774    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.203325    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:39.204797    4105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:39.208979 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:39.208992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:39.234396 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:39.234430 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:39.264983 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:39.265011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:39.320138 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:39.320173 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:40:38.748943 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:41.248791 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:41.835978 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:41.846958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:41.847061 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:41.872281 1685746 cri.go:96] found id: ""
	I1222 01:40:41.872307 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.872318 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:41.872324 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:41.872429 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:41.902068 1685746 cri.go:96] found id: ""
	I1222 01:40:41.902127 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.902137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:41.902163 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:41.902275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:41.936505 1685746 cri.go:96] found id: ""
	I1222 01:40:41.936535 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.936544 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:41.936550 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:41.936615 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:41.961446 1685746 cri.go:96] found id: ""
	I1222 01:40:41.961480 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.961489 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:41.961496 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:41.961569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:41.989500 1685746 cri.go:96] found id: ""
	I1222 01:40:41.989582 1685746 logs.go:282] 0 containers: []
	W1222 01:40:41.989606 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:41.989631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:41.989730 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:42.028918 1685746 cri.go:96] found id: ""
	I1222 01:40:42.028947 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.028956 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:42.028963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:42.029037 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:42.065570 1685746 cri.go:96] found id: ""
	I1222 01:40:42.065618 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.065633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:42.065641 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:42.065724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:42.095634 1685746 cri.go:96] found id: ""
	I1222 01:40:42.095661 1685746 logs.go:282] 0 containers: []
	W1222 01:40:42.095671 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:42.095681 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:42.095702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:42.158126 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:42.158170 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:42.175600 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:42.175640 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:42.256856 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:42.248823    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.249305    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.250890    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.251275    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:42.252835    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:42.256882 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:42.256896 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:42.283618 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:42.283665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:44.813189 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:44.824766 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:44.824836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:44.853167 1685746 cri.go:96] found id: ""
	I1222 01:40:44.853192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.853201 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:44.853208 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:44.853269 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:44.878679 1685746 cri.go:96] found id: ""
	I1222 01:40:44.878711 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.878721 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:44.878728 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:44.878792 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:44.905070 1685746 cri.go:96] found id: ""
	I1222 01:40:44.905097 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.905106 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:44.905113 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:44.905177 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:44.930494 1685746 cri.go:96] found id: ""
	I1222 01:40:44.930523 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.930533 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:44.930539 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:44.930599 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:44.960159 1685746 cri.go:96] found id: ""
	I1222 01:40:44.960187 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.960196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:44.960203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:44.960308 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:44.985038 1685746 cri.go:96] found id: ""
	I1222 01:40:44.985066 1685746 logs.go:282] 0 containers: []
	W1222 01:40:44.985076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:44.985083 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:44.985147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:45.046474 1685746 cri.go:96] found id: ""
	I1222 01:40:45.046501 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.046511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:45.046518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:45.046590 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:45.111231 1685746 cri.go:96] found id: ""
	I1222 01:40:45.111266 1685746 logs.go:282] 0 containers: []
	W1222 01:40:45.111275 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:45.111286 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:45.111299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:45.180293 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:45.180418 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:45.231743 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:45.231786 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:45.318004 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:45.307287    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.308090    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.309847    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.310539    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:45.312128    4338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:45.318031 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:45.318045 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:45.351434 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:45.351474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:43.748820 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:45.748974 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:47.885492 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:47.896303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:47.896380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:47.927221 1685746 cri.go:96] found id: ""
	I1222 01:40:47.927247 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.927257 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:47.927264 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:47.927326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:47.955055 1685746 cri.go:96] found id: ""
	I1222 01:40:47.955082 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.955091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:47.955098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:47.955167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:47.982730 1685746 cri.go:96] found id: ""
	I1222 01:40:47.982760 1685746 logs.go:282] 0 containers: []
	W1222 01:40:47.982770 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:47.982777 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:47.982841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:48.013060 1685746 cri.go:96] found id: ""
	I1222 01:40:48.013093 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.013104 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:48.013111 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:48.013184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:48.044824 1685746 cri.go:96] found id: ""
	I1222 01:40:48.044902 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.044918 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:48.044926 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:48.044994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:48.077777 1685746 cri.go:96] found id: ""
	I1222 01:40:48.077806 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.077816 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:48.077822 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:48.077887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:48.108631 1685746 cri.go:96] found id: ""
	I1222 01:40:48.108659 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.108669 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:48.108676 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:48.108767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:48.135002 1685746 cri.go:96] found id: ""
	I1222 01:40:48.135035 1685746 logs.go:282] 0 containers: []
	W1222 01:40:48.135045 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:48.135056 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:48.135092 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:48.192262 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:48.192299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:48.207972 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:48.208074 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:48.295537 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:48.286431    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.287217    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.288971    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.289714    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:48.290868    4457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:48.295563 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:48.295583 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:48.322629 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:48.322665 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:50.857236 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:50.868315 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:50.868396 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:50.894289 1685746 cri.go:96] found id: ""
	I1222 01:40:50.894337 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.894346 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:50.894353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:50.894414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:50.920265 1685746 cri.go:96] found id: ""
	I1222 01:40:50.920288 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.920297 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:50.920303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:50.920362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:50.946413 1685746 cri.go:96] found id: ""
	I1222 01:40:50.946437 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.946445 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:50.946452 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:50.946511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:50.973167 1685746 cri.go:96] found id: ""
	I1222 01:40:50.973192 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.973202 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:50.973209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:50.973278 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:50.998695 1685746 cri.go:96] found id: ""
	I1222 01:40:50.998730 1685746 logs.go:282] 0 containers: []
	W1222 01:40:50.998739 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:50.998746 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:50.998812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:51.027679 1685746 cri.go:96] found id: ""
	I1222 01:40:51.027748 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.027770 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:51.027792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:51.027882 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:51.057709 1685746 cri.go:96] found id: ""
	I1222 01:40:51.057791 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.057816 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:51.057839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:51.057933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:51.085239 1685746 cri.go:96] found id: ""
	I1222 01:40:51.085311 1685746 logs.go:282] 0 containers: []
	W1222 01:40:51.085335 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:51.085361 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:51.085402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:51.143088 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:51.143131 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:51.159838 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:51.159866 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:51.229894 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:51.221124    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.221892    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.223547    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.224109    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:51.225453    4573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:51.229917 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:51.229932 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:51.258211 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:51.258321 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:48.248802 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:50.748310 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:53.799763 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:53.811321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:53.811400 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:53.838808 1685746 cri.go:96] found id: ""
	I1222 01:40:53.838834 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.838844 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:53.838851 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:53.838918 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:53.865906 1685746 cri.go:96] found id: ""
	I1222 01:40:53.865930 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.865938 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:53.865945 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:53.866008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:53.891986 1685746 cri.go:96] found id: ""
	I1222 01:40:53.892030 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.892040 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:53.892047 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:53.892120 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:53.918633 1685746 cri.go:96] found id: ""
	I1222 01:40:53.918660 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.918670 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:53.918677 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:53.918748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:53.945224 1685746 cri.go:96] found id: ""
	I1222 01:40:53.945259 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.945268 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:53.945274 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:53.945345 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:53.976181 1685746 cri.go:96] found id: ""
	I1222 01:40:53.976207 1685746 logs.go:282] 0 containers: []
	W1222 01:40:53.976216 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:53.976223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:53.976286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:54.017529 1685746 cri.go:96] found id: ""
	I1222 01:40:54.017609 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.017633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:54.017657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:54.017766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:54.050157 1685746 cri.go:96] found id: ""
	I1222 01:40:54.050234 1685746 logs.go:282] 0 containers: []
	W1222 01:40:54.050257 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:54.050284 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:54.050322 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:54.107873 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:54.107911 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:54.123115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:54.123192 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:54.189938 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:54.180462    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.181036    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.182810    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.183522    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:54.185321    4689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:54.189963 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:54.189976 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:54.216904 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:54.216959 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:40:53.248434 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:55.748007 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:40:57.748191 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:40:56.757953 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:56.769647 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:56.769793 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:56.802913 1685746 cri.go:96] found id: ""
	I1222 01:40:56.802941 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.802951 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:56.802958 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:56.803018 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:56.828625 1685746 cri.go:96] found id: ""
	I1222 01:40:56.828654 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.828664 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:56.828671 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:56.828734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:56.853350 1685746 cri.go:96] found id: ""
	I1222 01:40:56.853378 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.853388 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:56.853394 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:56.853456 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:56.883418 1685746 cri.go:96] found id: ""
	I1222 01:40:56.883443 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.883458 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:56.883466 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:56.883532 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:56.912769 1685746 cri.go:96] found id: ""
	I1222 01:40:56.912799 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.912809 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:56.912817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:56.912880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:56.938494 1685746 cri.go:96] found id: ""
	I1222 01:40:56.938519 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.938529 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:56.938536 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:56.938602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:56.968944 1685746 cri.go:96] found id: ""
	I1222 01:40:56.968978 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.968987 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:56.968994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:56.969063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:56.995238 1685746 cri.go:96] found id: ""
	I1222 01:40:56.995265 1685746 logs.go:282] 0 containers: []
	W1222 01:40:56.995274 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:56.995284 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:40:56.995295 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:40:57.022601 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:40:57.022641 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:40:57.055915 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:57.055993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:57.110958 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:57.110993 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:40:57.126557 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:40:57.126587 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:40:57.199192 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:40:57.188757    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.189807    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.191598    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.192842    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:40:57.194605    4814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:40:59.699460 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:40:59.709928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:40:59.709999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:40:59.734831 1685746 cri.go:96] found id: ""
	I1222 01:40:59.734861 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.734870 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:40:59.734876 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:40:59.734939 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:40:59.766737 1685746 cri.go:96] found id: ""
	I1222 01:40:59.766765 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.766773 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:40:59.766785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:40:59.766845 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:40:59.800714 1685746 cri.go:96] found id: ""
	I1222 01:40:59.800742 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.800751 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:40:59.800757 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:40:59.800817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:40:59.828842 1685746 cri.go:96] found id: ""
	I1222 01:40:59.828871 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.828880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:40:59.828888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:40:59.828951 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:40:59.854824 1685746 cri.go:96] found id: ""
	I1222 01:40:59.854848 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.854857 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:40:59.854864 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:40:59.854928 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:40:59.879691 1685746 cri.go:96] found id: ""
	I1222 01:40:59.879761 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.879784 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:40:59.879798 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:40:59.879874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:40:59.905099 1685746 cri.go:96] found id: ""
	I1222 01:40:59.905136 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.905146 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:40:59.905152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:40:59.905232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:40:59.929727 1685746 cri.go:96] found id: ""
	I1222 01:40:59.929763 1685746 logs.go:282] 0 containers: []
	W1222 01:40:59.929775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:40:59.929784 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:40:59.929794 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:40:59.985430 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:40:59.985466 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:00.001212 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:00.001238 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:00.267041 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:00.255488    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.258122    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.259215    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.260139    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:00.261094    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:00.267072 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:00.267085 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:00.299707 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:00.299756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:00.248610 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:02.248653 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:02.866175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:02.877065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:02.877139 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:02.902030 1685746 cri.go:96] found id: ""
	I1222 01:41:02.902137 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.902161 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:02.902183 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:02.902277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:02.928023 1685746 cri.go:96] found id: ""
	I1222 01:41:02.928048 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.928058 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:02.928065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:02.928128 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:02.958559 1685746 cri.go:96] found id: ""
	I1222 01:41:02.958595 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.958605 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:02.958612 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:02.958675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:02.984249 1685746 cri.go:96] found id: ""
	I1222 01:41:02.984272 1685746 logs.go:282] 0 containers: []
	W1222 01:41:02.984281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:02.984287 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:02.984355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:03.033125 1685746 cri.go:96] found id: ""
	I1222 01:41:03.033152 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.033161 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:03.033167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:03.033228 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:03.058557 1685746 cri.go:96] found id: ""
	I1222 01:41:03.058583 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.058591 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:03.058598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:03.058657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:03.089068 1685746 cri.go:96] found id: ""
	I1222 01:41:03.089112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.089122 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:03.089132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:03.089210 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:03.119177 1685746 cri.go:96] found id: ""
	I1222 01:41:03.119201 1685746 logs.go:282] 0 containers: []
	W1222 01:41:03.119210 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:03.119220 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:03.119231 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:03.182970 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:03.173888    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.174799    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.176762    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.177121    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:03.178608    5021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:03.183000 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:03.183013 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:03.207694 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:03.207726 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:03.238481 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:03.238559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:03.311496 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:03.311531 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:05.829656 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:05.840301 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:05.840394 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:05.867057 1685746 cri.go:96] found id: ""
	I1222 01:41:05.867080 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.867089 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:05.867095 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:05.867155 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:05.897184 1685746 cri.go:96] found id: ""
	I1222 01:41:05.897206 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.897215 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:05.897221 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:05.897284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:05.922902 1685746 cri.go:96] found id: ""
	I1222 01:41:05.922924 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.922933 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:05.922940 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:05.923001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:05.947567 1685746 cri.go:96] found id: ""
	I1222 01:41:05.947591 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.947600 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:05.947606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:05.947725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:05.973767 1685746 cri.go:96] found id: ""
	I1222 01:41:05.973795 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.973803 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:05.973810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:05.973870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:05.999045 1685746 cri.go:96] found id: ""
	I1222 01:41:05.999075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:05.999084 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:05.999090 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:05.999156 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:06.037292 1685746 cri.go:96] found id: ""
	I1222 01:41:06.037323 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.037331 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:06.037338 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:06.037403 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:06.063105 1685746 cri.go:96] found id: ""
	I1222 01:41:06.063136 1685746 logs.go:282] 0 containers: []
	W1222 01:41:06.063145 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:06.063155 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:06.063166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:06.118645 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:06.118682 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:06.134249 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:06.134283 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:06.202948 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:06.194329    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.194939    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.196573    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.197084    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:06.198693    5136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:06.202967 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:06.202978 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:06.227736 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:06.227770 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:04.248851 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:06.748841 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:08.763766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:08.776166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:08.776292 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:08.802744 1685746 cri.go:96] found id: ""
	I1222 01:41:08.802770 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.802780 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:08.802787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:08.802897 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:08.829155 1685746 cri.go:96] found id: ""
	I1222 01:41:08.829196 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.829205 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:08.829212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:08.829286 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:08.853323 1685746 cri.go:96] found id: ""
	I1222 01:41:08.853358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.853368 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:08.853374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:08.853442 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:08.878843 1685746 cri.go:96] found id: ""
	I1222 01:41:08.878871 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.878880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:08.878887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:08.878948 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:08.907348 1685746 cri.go:96] found id: ""
	I1222 01:41:08.907374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.907383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:08.907390 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:08.907459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:08.935980 1685746 cri.go:96] found id: ""
	I1222 01:41:08.936006 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.936015 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:08.936022 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:08.936103 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:08.965110 1685746 cri.go:96] found id: ""
	I1222 01:41:08.965149 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.965159 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:08.965165 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:08.965240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:08.991481 1685746 cri.go:96] found id: ""
	I1222 01:41:08.991509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:08.991518 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:08.991527 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:08.991539 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:09.007297 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:09.007330 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:09.077476 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:09.069572    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.070224    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.071715    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.072134    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:09.073635    5248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:09.077557 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:09.077597 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:09.102923 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:09.102958 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:09.131422 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:09.131450 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:09.248676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:11.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:11.686744 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:11.697606 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:11.697689 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:11.722593 1685746 cri.go:96] found id: ""
	I1222 01:41:11.722664 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.722686 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:11.722701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:11.722796 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:11.767413 1685746 cri.go:96] found id: ""
	I1222 01:41:11.767439 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.767448 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:11.767454 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:11.767526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:11.800344 1685746 cri.go:96] found id: ""
	I1222 01:41:11.800433 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.800466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:11.800487 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:11.800594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:11.836608 1685746 cri.go:96] found id: ""
	I1222 01:41:11.836693 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.836717 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:11.836755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:11.836854 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:11.862781 1685746 cri.go:96] found id: ""
	I1222 01:41:11.862808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.862818 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:11.862830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:11.862894 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:11.891376 1685746 cri.go:96] found id: ""
	I1222 01:41:11.891401 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.891410 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:11.891416 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:11.891480 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:11.920553 1685746 cri.go:96] found id: ""
	I1222 01:41:11.920581 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.920590 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:11.920596 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:11.920657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:11.948610 1685746 cri.go:96] found id: ""
	I1222 01:41:11.948634 1685746 logs.go:282] 0 containers: []
	W1222 01:41:11.948642 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:11.948651 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:11.948662 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:12.006298 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:12.006340 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:12.022860 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:12.022889 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:12.087185 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:12.078758    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.079176    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.080965    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.081458    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:12.083372    5361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:12.087252 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:12.087282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:12.112381 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:12.112415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:14.645175 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:14.655581 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:14.655655 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:14.683086 1685746 cri.go:96] found id: ""
	I1222 01:41:14.683110 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.683118 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:14.683125 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:14.683192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:14.708684 1685746 cri.go:96] found id: ""
	I1222 01:41:14.708707 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.708716 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:14.708723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:14.708783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:14.733550 1685746 cri.go:96] found id: ""
	I1222 01:41:14.733572 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.733580 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:14.733586 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:14.733653 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:14.762029 1685746 cri.go:96] found id: ""
	I1222 01:41:14.762052 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.762061 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:14.762068 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:14.762191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:14.802569 1685746 cri.go:96] found id: ""
	I1222 01:41:14.802593 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.802602 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:14.802609 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:14.802668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:14.829402 1685746 cri.go:96] found id: ""
	I1222 01:41:14.829425 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.829434 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:14.829440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:14.829499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:14.854254 1685746 cri.go:96] found id: ""
	I1222 01:41:14.854276 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.854285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:14.854291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:14.854350 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:14.879183 1685746 cri.go:96] found id: ""
	I1222 01:41:14.879205 1685746 logs.go:282] 0 containers: []
	W1222 01:41:14.879213 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:14.879222 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:14.879239 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:14.933758 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:14.933795 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:14.948809 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:14.948834 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:15.022478 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:15.005575    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.006312    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.008255    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.009052    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:15.011873    5474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:15.022594 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:15.022610 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:15.071291 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:15.071336 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:14.248149 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:16.748036 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:17.608065 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:17.618810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:17.618881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:17.643606 1685746 cri.go:96] found id: ""
	I1222 01:41:17.643633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.643643 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:17.643650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:17.643760 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:17.669609 1685746 cri.go:96] found id: ""
	I1222 01:41:17.669639 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.669649 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:17.669656 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:17.669725 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:17.694910 1685746 cri.go:96] found id: ""
	I1222 01:41:17.694934 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.694943 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:17.694950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:17.695009 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:17.721067 1685746 cri.go:96] found id: ""
	I1222 01:41:17.721101 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.721111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:17.721118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:17.721251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:17.762594 1685746 cri.go:96] found id: ""
	I1222 01:41:17.762669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.762691 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:17.762715 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:17.762802 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:17.806835 1685746 cri.go:96] found id: ""
	I1222 01:41:17.806870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.806880 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:17.806887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:17.806964 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:17.837236 1685746 cri.go:96] found id: ""
	I1222 01:41:17.837273 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.837284 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:17.837291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:17.837362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:17.867730 1685746 cri.go:96] found id: ""
	I1222 01:41:17.867802 1685746 logs.go:282] 0 containers: []
	W1222 01:41:17.867825 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:17.867840 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:17.867852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:17.927517 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:17.927555 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:17.943454 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:17.943484 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:18.012436 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:18.001362    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.002881    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.003527    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.005497    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:18.006107    5586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:18.012522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:18.012553 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:18.040219 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:18.040262 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:20.572279 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:20.583193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:20.583266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:20.609051 1685746 cri.go:96] found id: ""
	I1222 01:41:20.609075 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.609083 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:20.609089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:20.609150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:20.635365 1685746 cri.go:96] found id: ""
	I1222 01:41:20.635391 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.635400 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:20.635406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:20.635470 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:20.664505 1685746 cri.go:96] found id: ""
	I1222 01:41:20.664532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.664541 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:20.664547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:20.664609 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:20.690863 1685746 cri.go:96] found id: ""
	I1222 01:41:20.690887 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.690904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:20.690916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:20.690981 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:20.716167 1685746 cri.go:96] found id: ""
	I1222 01:41:20.716188 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.716196 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:20.716203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:20.716262 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:20.758512 1685746 cri.go:96] found id: ""
	I1222 01:41:20.758538 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.758547 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:20.758554 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:20.758612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:20.789839 1685746 cri.go:96] found id: ""
	I1222 01:41:20.789866 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.789875 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:20.789882 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:20.789944 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:20.823216 1685746 cri.go:96] found id: ""
	I1222 01:41:20.823244 1685746 logs.go:282] 0 containers: []
	W1222 01:41:20.823254 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:20.823263 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:20.823275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:20.878834 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:20.878873 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:20.894375 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:20.894409 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:20.963456 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:20.954882    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.955717    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957444    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.957756    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:20.959270    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:20.963479 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:20.963518 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:20.992875 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:20.992916 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:18.748733 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:21.248234 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:23.526237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:23.540126 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:23.540244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:23.567806 1685746 cri.go:96] found id: ""
	I1222 01:41:23.567833 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.567842 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:23.567849 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:23.567915 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:23.594496 1685746 cri.go:96] found id: ""
	I1222 01:41:23.594525 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.594538 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:23.594546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:23.594614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:23.621007 1685746 cri.go:96] found id: ""
	I1222 01:41:23.621034 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.621043 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:23.621050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:23.621111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:23.646829 1685746 cri.go:96] found id: ""
	I1222 01:41:23.646857 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.646867 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:23.646874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:23.646941 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:23.672993 1685746 cri.go:96] found id: ""
	I1222 01:41:23.673020 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.673030 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:23.673036 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:23.673099 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:23.704873 1685746 cri.go:96] found id: ""
	I1222 01:41:23.704901 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.704910 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:23.704916 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:23.704980 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:23.731220 1685746 cri.go:96] found id: ""
	I1222 01:41:23.731248 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.731259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:23.731265 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:23.731330 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:23.769641 1685746 cri.go:96] found id: ""
	I1222 01:41:23.769669 1685746 logs.go:282] 0 containers: []
	W1222 01:41:23.769678 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:23.769687 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:23.769701 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:23.811900 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:23.811928 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:23.870851 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:23.870887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:23.886411 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:23.886488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:23.954566 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:23.945665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.946254    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948052    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.948665    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:23.950507    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:23.954588 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:23.954602 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.483766 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:26.495024 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:26.495100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:26.521679 1685746 cri.go:96] found id: ""
	I1222 01:41:26.521706 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.521716 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:26.521723 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:26.521786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:26.552746 1685746 cri.go:96] found id: ""
	I1222 01:41:26.552773 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.552782 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:26.552789 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:26.552856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:26.580045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.580072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.580082 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:26.580088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:26.580151 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:26.606656 1685746 cri.go:96] found id: ""
	I1222 01:41:26.606683 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.606693 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:26.606700 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:26.606759 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:26.632499 1685746 cri.go:96] found id: ""
	I1222 01:41:26.632539 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.632548 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:26.632556 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:26.632640 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:26.664045 1685746 cri.go:96] found id: ""
	I1222 01:41:26.664072 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.664082 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:26.664089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:26.664172 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	W1222 01:41:23.248384 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:25.748529 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:27.748967 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:26.689648 1685746 cri.go:96] found id: ""
	I1222 01:41:26.689672 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.689693 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:26.689704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:26.689772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:26.715926 1685746 cri.go:96] found id: ""
	I1222 01:41:26.715949 1685746 logs.go:282] 0 containers: []
	W1222 01:41:26.715958 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:26.715966 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:26.715977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:26.779696 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:26.779785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:26.802335 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:26.802412 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:26.866575 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:26.857663    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.858479    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860256    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.860898    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:26.862675    5928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:26.866599 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:26.866613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:26.893136 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:26.893176 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:29.425895 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:29.438488 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:29.438569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:29.467384 1685746 cri.go:96] found id: ""
	I1222 01:41:29.467415 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.467426 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:29.467432 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:29.467497 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:29.502253 1685746 cri.go:96] found id: ""
	I1222 01:41:29.502277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.502285 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:29.502291 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:29.502351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:29.538703 1685746 cri.go:96] found id: ""
	I1222 01:41:29.538730 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.538739 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:29.538747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:29.538809 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:29.567395 1685746 cri.go:96] found id: ""
	I1222 01:41:29.567422 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.567431 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:29.567439 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:29.567500 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:29.595415 1685746 cri.go:96] found id: ""
	I1222 01:41:29.595493 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.595508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:29.595516 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:29.595583 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:29.622583 1685746 cri.go:96] found id: ""
	I1222 01:41:29.622611 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.622620 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:29.622627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:29.622693 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:29.649130 1685746 cri.go:96] found id: ""
	I1222 01:41:29.649156 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.649166 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:29.649173 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:29.649240 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:29.676205 1685746 cri.go:96] found id: ""
	I1222 01:41:29.676231 1685746 logs.go:282] 0 containers: []
	W1222 01:41:29.676240 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:29.676250 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:29.676279 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:29.731980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:29.732016 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:29.747474 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:29.747503 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:29.833319 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:29.822964    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.824836    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.825723    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.827546    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:29.829272    6042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:29.833342 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:29.833355 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:29.859398 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:29.859432 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:30.247999 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:32.248426 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:32.387755 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:32.398548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:32.398639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:32.422848 1685746 cri.go:96] found id: ""
	I1222 01:41:32.422870 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.422879 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:32.422885 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:32.422976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:32.448126 1685746 cri.go:96] found id: ""
	I1222 01:41:32.448153 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.448162 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:32.448171 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:32.448233 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:32.476732 1685746 cri.go:96] found id: ""
	I1222 01:41:32.476769 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.476779 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:32.476785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:32.476856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:32.521856 1685746 cri.go:96] found id: ""
	I1222 01:41:32.521885 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.521915 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:32.521923 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:32.522010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:32.559083 1685746 cri.go:96] found id: ""
	I1222 01:41:32.559112 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.559121 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:32.559128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:32.559199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:32.585037 1685746 cri.go:96] found id: ""
	I1222 01:41:32.585066 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.585076 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:32.585082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:32.585142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:32.611094 1685746 cri.go:96] found id: ""
	I1222 01:41:32.611117 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.611126 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:32.611132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:32.611200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:32.636572 1685746 cri.go:96] found id: ""
	I1222 01:41:32.636598 1685746 logs.go:282] 0 containers: []
	W1222 01:41:32.636606 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:32.636614 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:32.636626 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:32.691721 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:32.691756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:32.706757 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:32.706791 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:32.784203 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:32.775781    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.776686    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778481    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.778815    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:32.780276    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:32.784277 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:32.784302 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:32.812067 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:32.812099 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:35.344181 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:35.354549 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:35.354621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:35.378138 1685746 cri.go:96] found id: ""
	I1222 01:41:35.378160 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.378169 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:35.378177 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:35.378236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:35.403725 1685746 cri.go:96] found id: ""
	I1222 01:41:35.403748 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.403757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:35.403764 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:35.403825 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:35.429025 1685746 cri.go:96] found id: ""
	I1222 01:41:35.429050 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.429059 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:35.429066 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:35.429129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:35.459607 1685746 cri.go:96] found id: ""
	I1222 01:41:35.459633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.459642 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:35.459649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:35.459707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:35.483992 1685746 cri.go:96] found id: ""
	I1222 01:41:35.484015 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.484024 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:35.484031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:35.484094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:35.517254 1685746 cri.go:96] found id: ""
	I1222 01:41:35.517277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.517286 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:35.517293 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:35.517353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:35.546137 1685746 cri.go:96] found id: ""
	I1222 01:41:35.546219 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.546242 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:35.546284 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:35.546378 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:35.576307 1685746 cri.go:96] found id: ""
	I1222 01:41:35.576329 1685746 logs.go:282] 0 containers: []
	W1222 01:41:35.576338 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:35.576347 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:35.576358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:35.631853 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:35.631887 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:35.646787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:35.646827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:35.713895 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:35.705676    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.706509    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708048    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.708614    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:35.710294    6260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:35.713927 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:35.713943 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:35.739168 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:35.739250 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:34.248875 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:36.748177 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:38.278358 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:38.289460 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:38.289534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:38.316292 1685746 cri.go:96] found id: ""
	I1222 01:41:38.316320 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.316329 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:38.316336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:38.316416 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:38.344932 1685746 cri.go:96] found id: ""
	I1222 01:41:38.344960 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.344969 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:38.344976 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:38.345038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:38.371484 1685746 cri.go:96] found id: ""
	I1222 01:41:38.371509 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.371519 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:38.371525 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:38.371594 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:38.401114 1685746 cri.go:96] found id: ""
	I1222 01:41:38.401140 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.401149 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:38.401157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:38.401217 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:38.427857 1685746 cri.go:96] found id: ""
	I1222 01:41:38.427881 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.427890 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:38.427897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:38.427962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:38.453333 1685746 cri.go:96] found id: ""
	I1222 01:41:38.453358 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.453367 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:38.453374 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:38.453455 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:38.477527 1685746 cri.go:96] found id: ""
	I1222 01:41:38.477610 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.477633 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:38.477655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:38.477748 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:38.523741 1685746 cri.go:96] found id: ""
	I1222 01:41:38.523763 1685746 logs.go:282] 0 containers: []
	W1222 01:41:38.523772 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:38.523787 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:38.523798 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:38.595469 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:38.587236    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.587861    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589425    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.589886    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:38.591514    6363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:38.595491 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:38.595508 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:38.621769 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:38.621808 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:38.651477 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:38.651507 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:38.710896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:38.710934 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.227040 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:41.237881 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:41.237954 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:41.265636 1685746 cri.go:96] found id: ""
	I1222 01:41:41.265671 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.265680 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:41.265687 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:41.265757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:41.291304 1685746 cri.go:96] found id: ""
	I1222 01:41:41.291330 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.291339 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:41.291346 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:41.291414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:41.316968 1685746 cri.go:96] found id: ""
	I1222 01:41:41.317003 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.317013 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:41.317020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:41.317094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:41.342750 1685746 cri.go:96] found id: ""
	I1222 01:41:41.342779 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.342794 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:41.342801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:41.342865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:41.368173 1685746 cri.go:96] found id: ""
	I1222 01:41:41.368197 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.368205 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:41.368212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:41.368275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:41.396263 1685746 cri.go:96] found id: ""
	I1222 01:41:41.396290 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.396300 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:41.396308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:41.396380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:41.424002 1685746 cri.go:96] found id: ""
	I1222 01:41:41.424028 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.424037 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:41.424044 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:41.424104 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:41.450858 1685746 cri.go:96] found id: ""
	I1222 01:41:41.450886 1685746 logs.go:282] 0 containers: []
	W1222 01:41:41.450894 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:41.450904 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:41.450915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:41.510703 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:41.510785 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:41.529398 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:41.529475 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:41.596968 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:41.588901    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.589540    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591264    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.591828    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:41.593389    6483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:41.596989 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:41.597002 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:41.623436 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:41.623472 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:39.248106 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:41.748067 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:44.153585 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:44.164792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:44.164865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:44.190259 1685746 cri.go:96] found id: ""
	I1222 01:41:44.190282 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.190290 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:44.190297 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:44.190357 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:44.223886 1685746 cri.go:96] found id: ""
	I1222 01:41:44.223911 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.223922 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:44.223929 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:44.223988 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:44.249898 1685746 cri.go:96] found id: ""
	I1222 01:41:44.249922 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.249931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:44.249948 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:44.250010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:44.275190 1685746 cri.go:96] found id: ""
	I1222 01:41:44.275217 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.275227 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:44.275233 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:44.275325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:44.301198 1685746 cri.go:96] found id: ""
	I1222 01:41:44.301221 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.301230 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:44.301237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:44.301311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:44.325952 1685746 cri.go:96] found id: ""
	I1222 01:41:44.325990 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.326000 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:44.326023 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:44.326154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:44.352189 1685746 cri.go:96] found id: ""
	I1222 01:41:44.352227 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.352236 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:44.352259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:44.352334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:44.377820 1685746 cri.go:96] found id: ""
	I1222 01:41:44.377848 1685746 logs.go:282] 0 containers: []
	W1222 01:41:44.377858 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:44.377868 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:44.377879 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:44.393230 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:44.393258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:44.463151 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:44.454750    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.455259    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.456851    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.457499    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:44.458487    6594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:44.463175 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:44.463188 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:44.488611 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:44.488690 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:44.523935 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:44.524011 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:41:44.248599 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:46.748094 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:47.091277 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:47.102299 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:47.102374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:47.128309 1685746 cri.go:96] found id: ""
	I1222 01:41:47.128334 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.128344 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:47.128351 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:47.128431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:47.154429 1685746 cri.go:96] found id: ""
	I1222 01:41:47.154456 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.154465 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:47.154473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:47.154535 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:47.179829 1685746 cri.go:96] found id: ""
	I1222 01:41:47.179856 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.179865 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:47.179872 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:47.179933 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:47.204965 1685746 cri.go:96] found id: ""
	I1222 01:41:47.204999 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.205009 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:47.205016 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:47.205088 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:47.231912 1685746 cri.go:96] found id: ""
	I1222 01:41:47.231939 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.231949 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:47.231955 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:47.232043 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:47.262187 1685746 cri.go:96] found id: ""
	I1222 01:41:47.262215 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.262230 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:47.262237 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:47.262301 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:47.287536 1685746 cri.go:96] found id: ""
	I1222 01:41:47.287567 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.287577 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:47.287583 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:47.287648 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:47.313516 1685746 cri.go:96] found id: ""
	I1222 01:41:47.313544 1685746 logs.go:282] 0 containers: []
	W1222 01:41:47.313553 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:47.313563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:47.313573 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:47.369295 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:47.369329 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:47.387169 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:47.387197 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:47.455311 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:47.445096    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.445837    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.447629    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.448117    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:47.449732    6709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:47.455335 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:47.455347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:47.481041 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:47.481078 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:50.030868 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:50.043616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:50.043692 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:50.072180 1685746 cri.go:96] found id: ""
	I1222 01:41:50.072210 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.072220 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:50.072229 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:50.072297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:50.100979 1685746 cri.go:96] found id: ""
	I1222 01:41:50.101005 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.101014 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:50.101021 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:50.101091 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:50.128360 1685746 cri.go:96] found id: ""
	I1222 01:41:50.128392 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.128404 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:50.128411 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:50.128476 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:50.154912 1685746 cri.go:96] found id: ""
	I1222 01:41:50.154945 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.154955 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:50.154963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:50.155033 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:50.181433 1685746 cri.go:96] found id: ""
	I1222 01:41:50.181465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.181474 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:50.181483 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:50.181553 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:50.207260 1685746 cri.go:96] found id: ""
	I1222 01:41:50.207289 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.207299 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:50.207305 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:50.207366 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:50.234601 1685746 cri.go:96] found id: ""
	I1222 01:41:50.234649 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.234659 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:50.234666 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:50.234744 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:50.264579 1685746 cri.go:96] found id: ""
	I1222 01:41:50.264621 1685746 logs.go:282] 0 containers: []
	W1222 01:41:50.264631 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:50.264641 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:50.264661 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:50.321078 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:50.321112 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:50.336044 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:50.336069 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:50.401373 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:50.392422    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.393059    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.394644    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.395244    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:50.396998    6821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:50.401396 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:50.401410 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:50.428108 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:50.428151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:48.749155 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:51.248977 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:52.958393 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:52.969793 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:52.969867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:53.021307 1685746 cri.go:96] found id: ""
	I1222 01:41:53.021331 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.021340 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:53.021352 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:53.021415 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:53.053765 1685746 cri.go:96] found id: ""
	I1222 01:41:53.053789 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.053798 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:53.053804 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:53.053872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:53.079107 1685746 cri.go:96] found id: ""
	I1222 01:41:53.079135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.079144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:53.079152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:53.079214 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:53.106101 1685746 cri.go:96] found id: ""
	I1222 01:41:53.106130 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.106138 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:53.106145 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:53.106209 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:53.135616 1685746 cri.go:96] found id: ""
	I1222 01:41:53.135643 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.135652 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:53.135659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:53.135766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:53.160318 1685746 cri.go:96] found id: ""
	I1222 01:41:53.160344 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.160353 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:53.160360 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:53.160451 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:53.185257 1685746 cri.go:96] found id: ""
	I1222 01:41:53.185297 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.185306 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:53.185313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:53.185401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:53.210753 1685746 cri.go:96] found id: ""
	I1222 01:41:53.210824 1685746 logs.go:282] 0 containers: []
	W1222 01:41:53.210839 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:53.210855 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:53.210867 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:53.237290 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:53.237323 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:53.267342 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:53.267374 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:53.323394 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:53.323429 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:53.339435 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:53.339465 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:53.403286 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:53.395297    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.395794    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397429    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.397875    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:53.399414    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:55.903619 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:55.914760 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:55.914836 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:55.939507 1685746 cri.go:96] found id: ""
	I1222 01:41:55.939532 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.939541 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:55.939548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:55.939614 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:55.965607 1685746 cri.go:96] found id: ""
	I1222 01:41:55.965633 1685746 logs.go:282] 0 containers: []
	W1222 01:41:55.965643 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:55.965649 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:55.965715 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:56.006138 1685746 cri.go:96] found id: ""
	I1222 01:41:56.006171 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.006181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:56.006188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:56.006256 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:56.040087 1685746 cri.go:96] found id: ""
	I1222 01:41:56.040116 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.040125 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:56.040131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:56.040191 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:56.068695 1685746 cri.go:96] found id: ""
	I1222 01:41:56.068719 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.068727 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:56.068734 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:56.068795 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:56.096726 1685746 cri.go:96] found id: ""
	I1222 01:41:56.096808 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.096832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:56.096854 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:56.096963 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:56.125548 1685746 cri.go:96] found id: ""
	I1222 01:41:56.125627 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.125652 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:56.125675 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:56.125763 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:56.150956 1685746 cri.go:96] found id: ""
	I1222 01:41:56.150986 1685746 logs.go:282] 0 containers: []
	W1222 01:41:56.150995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:56.151005 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:56.151049 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:56.216560 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:56.208477    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.209149    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.210844    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.211410    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:56.212923    7045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:56.216581 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:56.216594 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:56.242334 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:56.242368 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:41:56.270763 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:56.270793 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:56.325996 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:56.326038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1222 01:41:53.748987 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:41:56.248859 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:41:58.841618 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:41:58.852321 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:41:58.852411 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:41:58.877439 1685746 cri.go:96] found id: ""
	I1222 01:41:58.877465 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.877475 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:41:58.877482 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:41:58.877542 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:41:58.902343 1685746 cri.go:96] found id: ""
	I1222 01:41:58.902369 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.902378 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:41:58.902385 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:41:58.902443 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:41:58.927733 1685746 cri.go:96] found id: ""
	I1222 01:41:58.927758 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.927767 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:41:58.927774 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:41:58.927834 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:41:58.954349 1685746 cri.go:96] found id: ""
	I1222 01:41:58.954374 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.954384 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:41:58.954391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:41:58.954464 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:41:58.984449 1685746 cri.go:96] found id: ""
	I1222 01:41:58.984519 1685746 logs.go:282] 0 containers: []
	W1222 01:41:58.984533 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:41:58.984541 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:41:58.984612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:41:59.020245 1685746 cri.go:96] found id: ""
	I1222 01:41:59.020277 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.020294 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:41:59.020303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:41:59.020387 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:41:59.059067 1685746 cri.go:96] found id: ""
	I1222 01:41:59.059135 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.059157 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:41:59.059170 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:41:59.059244 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:41:59.090327 1685746 cri.go:96] found id: ""
	I1222 01:41:59.090355 1685746 logs.go:282] 0 containers: []
	W1222 01:41:59.090364 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:41:59.090372 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:41:59.090384 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:41:59.149768 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:41:59.149809 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:41:59.164825 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:41:59.164857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:41:59.232698 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:41:59.223745    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.224279    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.226992    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.227434    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:41:59.228967    7163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:41:59.232720 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:41:59.232734 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:41:59.258805 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:41:59.258840 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:41:58.748026 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:00.748292 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:01.787611 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:01.799088 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:01.799206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:01.829442 1685746 cri.go:96] found id: ""
	I1222 01:42:01.829521 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.829543 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:01.829566 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:01.829657 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:01.856095 1685746 cri.go:96] found id: ""
	I1222 01:42:01.856122 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.856132 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:01.856139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:01.856203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:01.882443 1685746 cri.go:96] found id: ""
	I1222 01:42:01.882469 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.882478 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:01.882485 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:01.882549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:01.908008 1685746 cri.go:96] found id: ""
	I1222 01:42:01.908033 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.908043 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:01.908049 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:01.908111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:01.934350 1685746 cri.go:96] found id: ""
	I1222 01:42:01.934377 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.934386 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:01.934393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:01.934457 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:01.960407 1685746 cri.go:96] found id: ""
	I1222 01:42:01.960433 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.960442 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:01.960449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:01.960512 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:01.988879 1685746 cri.go:96] found id: ""
	I1222 01:42:01.988915 1685746 logs.go:282] 0 containers: []
	W1222 01:42:01.988925 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:01.988931 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:01.989000 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:02.021404 1685746 cri.go:96] found id: ""
	I1222 01:42:02.021444 1685746 logs.go:282] 0 containers: []
	W1222 01:42:02.021454 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:02.021464 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:02.021476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:02.053252 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:02.053282 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:02.111509 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:02.111548 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:02.127002 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:02.127081 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:02.196408 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:02.188126    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.188871    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190476    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.190840    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:02.192032    7282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:02.196429 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:02.196442 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:04.723107 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:04.734699 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:04.734786 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:04.771439 1685746 cri.go:96] found id: ""
	I1222 01:42:04.771462 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.771471 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:04.771477 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:04.771540 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:04.806612 1685746 cri.go:96] found id: ""
	I1222 01:42:04.806639 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.806648 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:04.806655 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:04.806714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:04.832290 1685746 cri.go:96] found id: ""
	I1222 01:42:04.832320 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.832329 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:04.832336 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:04.832404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:04.860422 1685746 cri.go:96] found id: ""
	I1222 01:42:04.860460 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.860469 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:04.860494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:04.860603 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:04.885397 1685746 cri.go:96] found id: ""
	I1222 01:42:04.885424 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.885433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:04.885440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:04.885524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:04.910499 1685746 cri.go:96] found id: ""
	I1222 01:42:04.910529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.910539 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:04.910546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:04.910607 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:04.934849 1685746 cri.go:96] found id: ""
	I1222 01:42:04.934887 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.934897 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:04.934921 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:04.935013 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:04.964384 1685746 cri.go:96] found id: ""
	I1222 01:42:04.964411 1685746 logs.go:282] 0 containers: []
	W1222 01:42:04.964420 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:04.964429 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:04.964460 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:05.023249 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:05.023347 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:05.042677 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:05.042702 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:05.113125 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:05.104084    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.104668    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106408    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.106980    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:05.108789    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:05.113151 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:05.113167 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:05.139072 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:05.139109 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:03.248327 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:05.748676 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:07.672253 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:07.683433 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:07.683523 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:07.710000 1685746 cri.go:96] found id: ""
	I1222 01:42:07.710025 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.710033 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:07.710040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:07.710129 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:07.749657 1685746 cri.go:96] found id: ""
	I1222 01:42:07.749685 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.749695 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:07.749702 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:07.749769 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:07.779817 1685746 cri.go:96] found id: ""
	I1222 01:42:07.779844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.779853 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:07.779860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:07.779920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:07.809501 1685746 cri.go:96] found id: ""
	I1222 01:42:07.809529 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.809538 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:07.809546 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:07.809606 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:07.834291 1685746 cri.go:96] found id: ""
	I1222 01:42:07.834318 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.834327 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:07.834334 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:07.834395 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:07.859724 1685746 cri.go:96] found id: ""
	I1222 01:42:07.859791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.859807 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:07.859814 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:07.859874 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:07.891259 1685746 cri.go:96] found id: ""
	I1222 01:42:07.891287 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.891296 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:07.891303 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:07.891362 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:07.916371 1685746 cri.go:96] found id: ""
	I1222 01:42:07.916451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:07.916467 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:07.916477 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:07.916489 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:07.943955 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:07.943981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:08.000957 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:08.001003 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:08.021265 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:08.021299 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:08.098699 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:08.089767    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.090657    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.092419    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.093123    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:08.094829    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:08.098725 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:08.098739 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:10.625986 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:10.637185 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:10.637275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:10.663011 1685746 cri.go:96] found id: ""
	I1222 01:42:10.663039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.663048 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:10.663055 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:10.663121 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:10.689593 1685746 cri.go:96] found id: ""
	I1222 01:42:10.689623 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.689633 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:10.689639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:10.689704 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:10.718520 1685746 cri.go:96] found id: ""
	I1222 01:42:10.718545 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.718554 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:10.718561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:10.718627 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:10.748796 1685746 cri.go:96] found id: ""
	I1222 01:42:10.748829 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.748839 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:10.748846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:10.748919 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:10.780456 1685746 cri.go:96] found id: ""
	I1222 01:42:10.780493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.780508 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:10.780515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:10.780591 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:10.810196 1685746 cri.go:96] found id: ""
	I1222 01:42:10.810234 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.810243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:10.810250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:10.810346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:10.836475 1685746 cri.go:96] found id: ""
	I1222 01:42:10.836502 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.836511 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:10.836518 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:10.836582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:10.862222 1685746 cri.go:96] found id: ""
	I1222 01:42:10.862246 1685746 logs.go:282] 0 containers: []
	W1222 01:42:10.862255 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:10.862264 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:10.862275 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:10.918613 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:10.918648 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:10.933449 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:10.933478 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:11.013628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:11.003467    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.004191    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.006270    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.007253    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:11.009218    7604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:11.013706 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:11.013738 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:11.042713 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:11.042803 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:08.248287 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:10.748100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:12.748911 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:13.581897 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:13.592897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:13.592969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:13.621158 1685746 cri.go:96] found id: ""
	I1222 01:42:13.621184 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.621194 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:13.621200 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:13.621265 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:13.646742 1685746 cri.go:96] found id: ""
	I1222 01:42:13.646769 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.646778 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:13.646784 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:13.646843 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:13.671981 1685746 cri.go:96] found id: ""
	I1222 01:42:13.672014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.672023 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:13.672030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:13.672093 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:13.697359 1685746 cri.go:96] found id: ""
	I1222 01:42:13.697387 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.697397 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:13.697408 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:13.697471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:13.723455 1685746 cri.go:96] found id: ""
	I1222 01:42:13.723481 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.723491 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:13.723499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:13.723560 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:13.762227 1685746 cri.go:96] found id: ""
	I1222 01:42:13.762251 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.762259 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:13.762266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:13.762325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:13.792416 1685746 cri.go:96] found id: ""
	I1222 01:42:13.792440 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.792448 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:13.792455 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:13.792521 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:13.824151 1685746 cri.go:96] found id: ""
	I1222 01:42:13.824178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:13.824188 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:13.824227 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:13.824251 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:13.839610 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:13.839639 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:13.903103 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:13.894591    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.895392    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.896942    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.897426    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:13.898945    7715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:13.903125 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:13.903138 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:13.928958 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:13.928992 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:13.959685 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:13.959714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.518219 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:16.529223 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:16.529294 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:16.555927 1685746 cri.go:96] found id: ""
	I1222 01:42:16.555953 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.555962 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:16.555969 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:16.556028 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:16.581196 1685746 cri.go:96] found id: ""
	I1222 01:42:16.581223 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.581233 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:16.581240 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:16.581303 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:16.607543 1685746 cri.go:96] found id: ""
	I1222 01:42:16.607569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.607578 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:16.607585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:16.607651 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:16.637077 1685746 cri.go:96] found id: ""
	I1222 01:42:16.637106 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.637116 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:16.637123 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:16.637183 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:16.662155 1685746 cri.go:96] found id: ""
	I1222 01:42:16.662178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.662187 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:16.662193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:16.662257 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	W1222 01:42:14.749008 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:17.249086 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:16.694483 1685746 cri.go:96] found id: ""
	I1222 01:42:16.694507 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.694516 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:16.694523 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:16.694582 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:16.719153 1685746 cri.go:96] found id: ""
	I1222 01:42:16.719178 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.719188 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:16.719195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:16.719258 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:16.750982 1685746 cri.go:96] found id: ""
	I1222 01:42:16.751007 1685746 logs.go:282] 0 containers: []
	W1222 01:42:16.751017 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:16.751026 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:16.751038 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:16.809848 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:16.809888 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:16.828821 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:16.828852 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:16.896032 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:16.888151    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.888893    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890527    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.890859    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:16.892403    7829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:16.896058 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:16.896071 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:16.921650 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:16.921686 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.450391 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:19.461241 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:19.461314 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:19.488679 1685746 cri.go:96] found id: ""
	I1222 01:42:19.488705 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.488715 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:19.488722 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:19.488784 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:19.514947 1685746 cri.go:96] found id: ""
	I1222 01:42:19.514972 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.514982 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:19.514989 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:19.515051 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:19.541761 1685746 cri.go:96] found id: ""
	I1222 01:42:19.541786 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.541795 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:19.541802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:19.541867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:19.566418 1685746 cri.go:96] found id: ""
	I1222 01:42:19.566441 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.566450 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:19.566456 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:19.566515 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:19.591707 1685746 cri.go:96] found id: ""
	I1222 01:42:19.591739 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.591748 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:19.591754 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:19.591857 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:19.618308 1685746 cri.go:96] found id: ""
	I1222 01:42:19.618343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.618352 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:19.618362 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:19.618441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:19.644750 1685746 cri.go:96] found id: ""
	I1222 01:42:19.644791 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.644801 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:19.644808 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:19.644883 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:19.674267 1685746 cri.go:96] found id: ""
	I1222 01:42:19.674295 1685746 logs.go:282] 0 containers: []
	W1222 01:42:19.674304 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:19.674315 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:19.674327 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:19.689360 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:19.689445 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:19.766188 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:19.757030    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.758928    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760510    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.760846    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:19.762319    7943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:19.766263 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:19.766290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:19.793580 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:19.793657 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:19.829853 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:19.829884 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1222 01:42:19.748284 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:22.248100 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:22.388471 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:22.399089 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:22.399192 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:22.428498 1685746 cri.go:96] found id: ""
	I1222 01:42:22.428569 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.428583 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:22.428591 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:22.428672 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:22.458145 1685746 cri.go:96] found id: ""
	I1222 01:42:22.458182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.458196 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:22.458203 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:22.458276 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:22.485165 1685746 cri.go:96] found id: ""
	I1222 01:42:22.485202 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.485212 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:22.485218 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:22.485283 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:22.510263 1685746 cri.go:96] found id: ""
	I1222 01:42:22.510292 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.510302 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:22.510308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:22.510374 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:22.539347 1685746 cri.go:96] found id: ""
	I1222 01:42:22.539374 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.539383 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:22.539391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:22.539453 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:22.564154 1685746 cri.go:96] found id: ""
	I1222 01:42:22.564182 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.564193 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:22.564205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:22.564311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:22.593661 1685746 cri.go:96] found id: ""
	I1222 01:42:22.593688 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.593697 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:22.593703 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:22.593767 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:22.618629 1685746 cri.go:96] found id: ""
	I1222 01:42:22.618654 1685746 logs.go:282] 0 containers: []
	W1222 01:42:22.618663 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:22.618672 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:22.618714 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:22.675019 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:22.675057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:22.690208 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:22.690241 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:22.759102 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:22.749007    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.749809    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.751450    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.752099    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:22.753804    8059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:22.759127 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:22.759140 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:22.790419 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:22.790453 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:25.330239 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:25.341121 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:25.341190 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:25.370417 1685746 cri.go:96] found id: ""
	I1222 01:42:25.370493 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.370523 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:25.370543 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:25.370636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:25.399975 1685746 cri.go:96] found id: ""
	I1222 01:42:25.400000 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.400009 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:25.400015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:25.400075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:25.424384 1685746 cri.go:96] found id: ""
	I1222 01:42:25.424414 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.424424 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:25.424431 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:25.424491 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:25.453828 1685746 cri.go:96] found id: ""
	I1222 01:42:25.453916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.453956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:25.453984 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:25.454124 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:25.480847 1685746 cri.go:96] found id: ""
	I1222 01:42:25.480868 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.480877 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:25.480883 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:25.480942 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:25.508776 1685746 cri.go:96] found id: ""
	I1222 01:42:25.508801 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.508810 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:25.508817 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:25.508877 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:25.539362 1685746 cri.go:96] found id: ""
	I1222 01:42:25.539385 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.539396 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:25.539402 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:25.539461 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:25.566615 1685746 cri.go:96] found id: ""
	I1222 01:42:25.566641 1685746 logs.go:282] 0 containers: []
	W1222 01:42:25.566650 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:25.566659 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:25.566670 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:25.622750 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:25.622784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:25.638693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:25.638728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:25.702796 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:25.695589    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.695998    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697467    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.697800    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:25.699229    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:25.702823 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:25.702835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:25.727901 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:25.727938 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 01:42:24.248221 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:26.748069 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	W1222 01:42:29.248000 1681323 node_ready.go:55] error getting node "no-preload-154186" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-154186": dial tcp 192.168.85.2:8443: connect: connection refused
	I1222 01:42:31.247763 1681323 node_ready.go:38] duration metric: took 6m0.000217195s for node "no-preload-154186" to be "Ready" ...
	I1222 01:42:31.251066 1681323 out.go:203] 
	W1222 01:42:31.253946 1681323 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 01:42:31.253969 1681323 out.go:285] * 
	W1222 01:42:31.256107 1681323 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 01:42:31.259342 1681323 out.go:203] 
	I1222 01:42:28.269113 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:28.280220 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:28.280317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:28.305926 1685746 cri.go:96] found id: ""
	I1222 01:42:28.305948 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.305957 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:28.305963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:28.306020 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:28.330985 1685746 cri.go:96] found id: ""
	I1222 01:42:28.331010 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.331020 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:28.331026 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:28.331086 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:28.357992 1685746 cri.go:96] found id: ""
	I1222 01:42:28.358018 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.358028 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:28.358035 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:28.358131 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:28.384559 1685746 cri.go:96] found id: ""
	I1222 01:42:28.384585 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.384594 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:28.384603 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:28.384665 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:28.412628 1685746 cri.go:96] found id: ""
	I1222 01:42:28.412650 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.412659 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:28.412665 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:28.412731 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:28.438582 1685746 cri.go:96] found id: ""
	I1222 01:42:28.438605 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.438613 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:28.438620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:28.438685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:28.468458 1685746 cri.go:96] found id: ""
	I1222 01:42:28.468484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.468493 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:28.468500 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:28.468565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:28.493207 1685746 cri.go:96] found id: ""
	I1222 01:42:28.493231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:28.493239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:28.493249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:28.493260 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:28.547741 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:28.547777 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:28.562578 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:28.562608 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:28.637227 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:28.629669    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.630219    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.631680    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.632052    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:28.633501    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:28.637250 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:28.637263 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:28.662593 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:28.662632 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.190941 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:31.202783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:31.202858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:31.227601 1685746 cri.go:96] found id: ""
	I1222 01:42:31.227625 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.227633 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:31.227642 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:31.227718 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:31.267011 1685746 cri.go:96] found id: ""
	I1222 01:42:31.267040 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.267049 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:31.267056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:31.267118 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:31.363207 1685746 cri.go:96] found id: ""
	I1222 01:42:31.363231 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.363239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:31.363246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:31.363320 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:31.412753 1685746 cri.go:96] found id: ""
	I1222 01:42:31.412780 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.412788 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:31.412796 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:31.412858 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:31.453115 1685746 cri.go:96] found id: ""
	I1222 01:42:31.453145 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.453154 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:31.453167 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:31.453225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:31.492529 1685746 cri.go:96] found id: ""
	I1222 01:42:31.492550 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.492558 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:31.492565 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:31.492621 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:31.529156 1685746 cri.go:96] found id: ""
	I1222 01:42:31.529179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.529187 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:31.529193 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:31.529252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:31.561255 1685746 cri.go:96] found id: ""
	I1222 01:42:31.561283 1685746 logs.go:282] 0 containers: []
	W1222 01:42:31.561292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:31.561301 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:31.561314 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:31.622500 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:31.622526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:31.690749 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:31.690784 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:31.706062 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:31.706182 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:31.827329 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:31.792352    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804228    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804969    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820004    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820674    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:31.792352    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804228    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.804969    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820004    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:31.820674    8407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:31.827354 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:31.827369 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.368888 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:34.380077 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:34.380154 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:34.406174 1685746 cri.go:96] found id: ""
	I1222 01:42:34.406198 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.406207 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:34.406213 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:34.406280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:34.437127 1685746 cri.go:96] found id: ""
	I1222 01:42:34.437152 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.437161 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:34.437168 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:34.437234 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:34.462419 1685746 cri.go:96] found id: ""
	I1222 01:42:34.462445 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.462454 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:34.462463 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:34.462524 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:34.491011 1685746 cri.go:96] found id: ""
	I1222 01:42:34.491039 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.491049 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:34.491056 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:34.491117 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:34.515544 1685746 cri.go:96] found id: ""
	I1222 01:42:34.515570 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.515580 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:34.515587 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:34.515644 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:34.543686 1685746 cri.go:96] found id: ""
	I1222 01:42:34.543714 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.543722 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:34.543730 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:34.543788 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:34.572402 1685746 cri.go:96] found id: ""
	I1222 01:42:34.572427 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.572436 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:34.572442 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:34.572561 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:34.597762 1685746 cri.go:96] found id: ""
	I1222 01:42:34.597789 1685746 logs.go:282] 0 containers: []
	W1222 01:42:34.597799 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:34.597808 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:34.597820 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:34.622955 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:34.622991 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:34.651563 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:34.651592 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:34.708102 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:34.708139 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:34.723329 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:34.723358 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:34.788870 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:34.780572    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.781230    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.782852    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.783472    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.785097    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:34.780572    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.781230    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.782852    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.783472    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:34.785097    8523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:37.289033 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:37.307914 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:37.308010 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:37.342876 1685746 cri.go:96] found id: ""
	I1222 01:42:37.342916 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.342925 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:37.342932 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:37.342994 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:37.369883 1685746 cri.go:96] found id: ""
	I1222 01:42:37.369912 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.369921 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:37.369928 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:37.369990 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:37.399765 1685746 cri.go:96] found id: ""
	I1222 01:42:37.399792 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.399800 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:37.399807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:37.399887 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:37.425866 1685746 cri.go:96] found id: ""
	I1222 01:42:37.425894 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.425904 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:37.425911 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:37.425976 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:37.452177 1685746 cri.go:96] found id: ""
	I1222 01:42:37.452252 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.452273 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:37.452280 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:37.452349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:37.478374 1685746 cri.go:96] found id: ""
	I1222 01:42:37.478405 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.478415 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:37.478421 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:37.478482 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:37.504627 1685746 cri.go:96] found id: ""
	I1222 01:42:37.504663 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.504672 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:37.504679 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:37.504785 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:37.531304 1685746 cri.go:96] found id: ""
	I1222 01:42:37.531343 1685746 logs.go:282] 0 containers: []
	W1222 01:42:37.531353 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:37.531380 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:37.531399 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:37.559371 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:37.559401 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:37.614026 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:37.614064 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:37.630657 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:37.630689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:37.698972 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:37.690733    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.691573    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.693410    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.694215    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.695368    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:37.690733    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.691573    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.693410    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.694215    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:37.695368    8633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:37.698998 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:37.699010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.226630 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:40.251806 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:40.251880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:40.312461 1685746 cri.go:96] found id: ""
	I1222 01:42:40.312484 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.312493 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:40.312499 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:40.312559 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:40.346654 1685746 cri.go:96] found id: ""
	I1222 01:42:40.346682 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.346691 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:40.346697 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:40.346757 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:40.376245 1685746 cri.go:96] found id: ""
	I1222 01:42:40.376279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.376288 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:40.376294 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:40.376355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:40.400546 1685746 cri.go:96] found id: ""
	I1222 01:42:40.400572 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.400581 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:40.400588 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:40.400647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:40.425326 1685746 cri.go:96] found id: ""
	I1222 01:42:40.425353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.425362 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:40.425369 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:40.425431 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:40.449304 1685746 cri.go:96] found id: ""
	I1222 01:42:40.449328 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.449337 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:40.449345 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:40.449405 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:40.474828 1685746 cri.go:96] found id: ""
	I1222 01:42:40.474854 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.474863 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:40.474870 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:40.474931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:40.503909 1685746 cri.go:96] found id: ""
	I1222 01:42:40.503933 1685746 logs.go:282] 0 containers: []
	W1222 01:42:40.503941 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:40.503950 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:40.503960 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:40.559784 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:40.559821 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:40.575010 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:40.575041 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:40.643863 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:40.635244    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.635950    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.637730    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.638261    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:40.639693    8736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:40.643888 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:40.643900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:40.674641 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:40.674683 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:43.208931 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:43.219892 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:43.219965 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:43.278356 1685746 cri.go:96] found id: ""
	I1222 01:42:43.278383 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.278393 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:43.278399 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:43.278468 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:43.318802 1685746 cri.go:96] found id: ""
	I1222 01:42:43.318828 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.318838 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:43.318844 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:43.318903 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:43.351222 1685746 cri.go:96] found id: ""
	I1222 01:42:43.351247 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.351256 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:43.351263 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:43.351323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:43.377242 1685746 cri.go:96] found id: ""
	I1222 01:42:43.377267 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.377275 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:43.377282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:43.377346 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:43.403326 1685746 cri.go:96] found id: ""
	I1222 01:42:43.403353 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.403363 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:43.403370 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:43.403459 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:43.429205 1685746 cri.go:96] found id: ""
	I1222 01:42:43.429232 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.429241 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:43.429248 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:43.429351 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:43.455157 1685746 cri.go:96] found id: ""
	I1222 01:42:43.455188 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.455198 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:43.455204 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:43.455274 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:43.484817 1685746 cri.go:96] found id: ""
	I1222 01:42:43.484846 1685746 logs.go:282] 0 containers: []
	W1222 01:42:43.484856 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:43.484866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:43.484877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:43.544248 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:43.544285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:43.559152 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:43.559184 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:43.623520 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:43.614277    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.615133    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.616676    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.617246    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:43.619047    8850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:43.623546 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:43.623559 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:43.648911 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:43.648951 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:46.182386 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:46.193692 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:46.193766 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:46.219554 1685746 cri.go:96] found id: ""
	I1222 01:42:46.219592 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.219602 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:46.219608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:46.219667 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:46.269097 1685746 cri.go:96] found id: ""
	I1222 01:42:46.269128 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.269137 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:46.269152 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:46.269215 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:46.315573 1685746 cri.go:96] found id: ""
	I1222 01:42:46.315609 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.315619 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:46.315627 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:46.315698 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:46.354254 1685746 cri.go:96] found id: ""
	I1222 01:42:46.354291 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.354300 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:46.354311 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:46.354385 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:46.382733 1685746 cri.go:96] found id: ""
	I1222 01:42:46.382810 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.382823 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:46.382831 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:46.382893 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:46.409988 1685746 cri.go:96] found id: ""
	I1222 01:42:46.410014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.410024 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:46.410032 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:46.410123 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:46.440621 1685746 cri.go:96] found id: ""
	I1222 01:42:46.440645 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.440654 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:46.440661 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:46.440726 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:46.466426 1685746 cri.go:96] found id: ""
	I1222 01:42:46.466451 1685746 logs.go:282] 0 containers: []
	W1222 01:42:46.466461 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:46.466478 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:46.466491 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:46.522404 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:46.522449 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:46.538001 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:46.538129 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:46.608273 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:46.599659    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.600513    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602269    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.602918    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:46.604666    8965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:46.608296 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:46.608311 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:46.634354 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:46.634388 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.167965 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:49.178919 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:49.178992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:49.204884 1685746 cri.go:96] found id: ""
	I1222 01:42:49.204909 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.204917 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:49.204924 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:49.204992 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:49.231503 1685746 cri.go:96] found id: ""
	I1222 01:42:49.231530 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.231539 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:49.231547 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:49.231611 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:49.274476 1685746 cri.go:96] found id: ""
	I1222 01:42:49.274500 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.274508 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:49.274515 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:49.274577 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:49.318032 1685746 cri.go:96] found id: ""
	I1222 01:42:49.318054 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.318063 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:49.318069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:49.318163 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:49.361375 1685746 cri.go:96] found id: ""
	I1222 01:42:49.361398 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.361407 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:49.361414 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:49.361475 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:49.389203 1685746 cri.go:96] found id: ""
	I1222 01:42:49.389230 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.389240 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:49.389247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:49.389315 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:49.419554 1685746 cri.go:96] found id: ""
	I1222 01:42:49.419579 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.419588 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:49.419595 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:49.419656 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:49.448457 1685746 cri.go:96] found id: ""
	I1222 01:42:49.448482 1685746 logs.go:282] 0 containers: []
	W1222 01:42:49.448491 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:49.448501 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:49.448513 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:49.477586 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:49.477616 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:49.534782 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:49.534822 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:49.550136 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:49.550166 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:49.618143 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:49.609211    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.610126    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.611985    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.612723    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:49.614504    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:49.618169 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:49.618190 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.144370 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:52.155874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:52.155999 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:52.183608 1685746 cri.go:96] found id: ""
	I1222 01:42:52.183633 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.183641 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:52.183648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:52.183710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:52.213975 1685746 cri.go:96] found id: ""
	I1222 01:42:52.214002 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.214011 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:52.214018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:52.214108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:52.260878 1685746 cri.go:96] found id: ""
	I1222 01:42:52.260904 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.260913 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:52.260920 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:52.260986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:52.326163 1685746 cri.go:96] found id: ""
	I1222 01:42:52.326191 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.326200 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:52.326206 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:52.326268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:52.351586 1685746 cri.go:96] found id: ""
	I1222 01:42:52.351610 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.351619 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:52.351625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:52.351685 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:52.378191 1685746 cri.go:96] found id: ""
	I1222 01:42:52.378271 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.378297 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:52.378320 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:52.378423 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:52.403988 1685746 cri.go:96] found id: ""
	I1222 01:42:52.404014 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.404024 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:52.404030 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:52.404115 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:52.434842 1685746 cri.go:96] found id: ""
	I1222 01:42:52.434870 1685746 logs.go:282] 0 containers: []
	W1222 01:42:52.434879 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:52.434888 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:52.434901 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:52.493615 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:52.493659 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:52.509970 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:52.510008 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:52.573713 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:52.565508    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.566173    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.567830    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.568498    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:52.570110    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:52.573748 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:52.573760 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:52.598497 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:52.598532 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.130037 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:55.141017 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:55.141094 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:55.166253 1685746 cri.go:96] found id: ""
	I1222 01:42:55.166279 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.166289 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:55.166298 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:55.166358 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:55.190818 1685746 cri.go:96] found id: ""
	I1222 01:42:55.190844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.190856 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:55.190863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:55.190969 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:55.216347 1685746 cri.go:96] found id: ""
	I1222 01:42:55.216380 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.216390 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:55.216397 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:55.216501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:55.259015 1685746 cri.go:96] found id: ""
	I1222 01:42:55.259091 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.259115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:55.259135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:55.259247 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:55.326026 1685746 cri.go:96] found id: ""
	I1222 01:42:55.326049 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.326058 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:55.326065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:55.326147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:55.350799 1685746 cri.go:96] found id: ""
	I1222 01:42:55.350823 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.350832 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:55.350839 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:55.350899 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:55.376097 1685746 cri.go:96] found id: ""
	I1222 01:42:55.376123 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.376133 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:55.376139 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:55.376200 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:55.401620 1685746 cri.go:96] found id: ""
	I1222 01:42:55.401693 1685746 logs.go:282] 0 containers: []
	W1222 01:42:55.401715 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:55.401740 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:55.401783 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:55.434315 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:55.434343 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:55.489616 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:55.489652 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:42:55.504798 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:55.504829 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:55.569246 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:55.560833    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.561495    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563008    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.563457    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:55.564945    9315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:55.569273 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:55.569285 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.094905 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:42:58.105827 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:42:58.105902 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:42:58.131496 1685746 cri.go:96] found id: ""
	I1222 01:42:58.131522 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.131531 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:42:58.131538 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:42:58.131602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:42:58.156152 1685746 cri.go:96] found id: ""
	I1222 01:42:58.156179 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.156188 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:42:58.156195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:42:58.156253 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:42:58.182075 1685746 cri.go:96] found id: ""
	I1222 01:42:58.182124 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.182140 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:42:58.182147 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:42:58.182211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:42:58.212714 1685746 cri.go:96] found id: ""
	I1222 01:42:58.212737 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.212746 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:42:58.212752 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:42:58.212811 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:42:58.256896 1685746 cri.go:96] found id: ""
	I1222 01:42:58.256919 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.256931 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:42:58.256938 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:42:58.257002 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:42:58.314212 1685746 cri.go:96] found id: ""
	I1222 01:42:58.314235 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.314243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:42:58.314250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:42:58.314311 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:42:58.348822 1685746 cri.go:96] found id: ""
	I1222 01:42:58.348844 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.348853 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:42:58.348860 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:42:58.349006 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:42:58.375112 1685746 cri.go:96] found id: ""
	I1222 01:42:58.375139 1685746 logs.go:282] 0 containers: []
	W1222 01:42:58.375148 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:42:58.375157 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:42:58.375199 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:42:58.440769 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:42:58.432188    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.432720    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.434403    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.435024    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:42:58.436797    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:42:58.440793 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:42:58.440807 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:42:58.466180 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:42:58.466214 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:42:58.498249 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:42:58.498277 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:42:58.553912 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:42:58.553948 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.069587 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:01.080494 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:01.080569 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:01.106366 1685746 cri.go:96] found id: ""
	I1222 01:43:01.106393 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.106403 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:01.106409 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:01.106472 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:01.134991 1685746 cri.go:96] found id: ""
	I1222 01:43:01.135019 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.135028 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:01.135040 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:01.135108 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:01.161160 1685746 cri.go:96] found id: ""
	I1222 01:43:01.161188 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.161198 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:01.161205 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:01.161268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:01.189244 1685746 cri.go:96] found id: ""
	I1222 01:43:01.189271 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.189281 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:01.189288 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:01.189353 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:01.216039 1685746 cri.go:96] found id: ""
	I1222 01:43:01.216109 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.216123 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:01.216131 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:01.216206 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:01.255772 1685746 cri.go:96] found id: ""
	I1222 01:43:01.255803 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.255812 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:01.255818 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:01.255880 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:01.331745 1685746 cri.go:96] found id: ""
	I1222 01:43:01.331771 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.331780 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:01.331787 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:01.331856 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:01.360958 1685746 cri.go:96] found id: ""
	I1222 01:43:01.360985 1685746 logs.go:282] 0 containers: []
	W1222 01:43:01.360995 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:01.361003 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:01.361014 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:01.416443 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:01.416479 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:01.433706 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:01.433735 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:01.504365 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:01.496722    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.497337    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.498577    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.499029    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:01.500569    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:01.504393 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:01.504405 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:01.530386 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:01.530421 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.060702 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:04.074701 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:04.074781 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:04.104007 1685746 cri.go:96] found id: ""
	I1222 01:43:04.104034 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.104043 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:04.104050 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:04.104110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:04.129051 1685746 cri.go:96] found id: ""
	I1222 01:43:04.129081 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.129091 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:04.129098 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:04.129160 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:04.155234 1685746 cri.go:96] found id: ""
	I1222 01:43:04.155260 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.155275 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:04.155282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:04.155344 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:04.180095 1685746 cri.go:96] found id: ""
	I1222 01:43:04.180120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.180130 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:04.180137 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:04.180199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:04.204953 1685746 cri.go:96] found id: ""
	I1222 01:43:04.204976 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.204984 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:04.204991 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:04.205052 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:04.231351 1685746 cri.go:96] found id: ""
	I1222 01:43:04.231376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.231385 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:04.231392 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:04.231452 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:04.269450 1685746 cri.go:96] found id: ""
	I1222 01:43:04.269476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.269485 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:04.269492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:04.269556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:04.310137 1685746 cri.go:96] found id: ""
	I1222 01:43:04.310210 1685746 logs.go:282] 0 containers: []
	W1222 01:43:04.310247 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:04.310276 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:04.310304 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:04.330066 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:04.330204 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:04.398531 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:04.389653    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.390276    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.391374    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.392896    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:04.393293    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:04.398600 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:04.398622 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:04.423684 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:04.423715 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:04.455847 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:04.455915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.011267 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:07.022247 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:07.022373 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:07.047710 1685746 cri.go:96] found id: ""
	I1222 01:43:07.047737 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.047746 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:07.047755 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:07.047817 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:07.071622 1685746 cri.go:96] found id: ""
	I1222 01:43:07.071644 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.071653 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:07.071662 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:07.071724 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:07.100514 1685746 cri.go:96] found id: ""
	I1222 01:43:07.100539 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.100548 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:07.100555 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:07.100622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:07.126740 1685746 cri.go:96] found id: ""
	I1222 01:43:07.126810 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.126833 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:07.126845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:07.126921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:07.156147 1685746 cri.go:96] found id: ""
	I1222 01:43:07.156174 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.156184 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:07.156190 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:07.156268 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:07.185551 1685746 cri.go:96] found id: ""
	I1222 01:43:07.185574 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.185583 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:07.185589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:07.185670 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:07.210495 1685746 cri.go:96] found id: ""
	I1222 01:43:07.210563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.210585 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:07.210608 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:07.210679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:07.234671 1685746 cri.go:96] found id: ""
	I1222 01:43:07.234751 1685746 logs.go:282] 0 containers: []
	W1222 01:43:07.234775 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:07.234799 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:07.234847 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:07.318902 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:07.318936 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:07.334947 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:07.334977 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:07.400498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:07.392484    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.392898    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.394668    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.395165    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:07.396785    9748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:07.400520 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:07.400534 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:07.425576 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:07.425613 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:09.957230 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:09.968065 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:09.968142 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:09.993760 1685746 cri.go:96] found id: ""
	I1222 01:43:09.993785 1685746 logs.go:282] 0 containers: []
	W1222 01:43:09.993794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:09.993802 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:09.993870 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:10.024110 1685746 cri.go:96] found id: ""
	I1222 01:43:10.024140 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.024151 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:10.024157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:10.024232 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:10.053092 1685746 cri.go:96] found id: ""
	I1222 01:43:10.053122 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.053132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:10.053138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:10.053203 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:10.078967 1685746 cri.go:96] found id: ""
	I1222 01:43:10.078994 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.079004 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:10.079011 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:10.079079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:10.105969 1685746 cri.go:96] found id: ""
	I1222 01:43:10.105993 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.106001 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:10.106008 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:10.106164 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:10.132413 1685746 cri.go:96] found id: ""
	I1222 01:43:10.132448 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.132457 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:10.132464 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:10.132526 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:10.158912 1685746 cri.go:96] found id: ""
	I1222 01:43:10.158941 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.158950 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:10.158957 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:10.159038 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:10.185594 1685746 cri.go:96] found id: ""
	I1222 01:43:10.185621 1685746 logs.go:282] 0 containers: []
	W1222 01:43:10.185630 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:10.185639 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:10.185681 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:10.214349 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:10.214378 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:10.274002 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:10.274096 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:10.289686 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:10.289761 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:10.375337 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:10.363816    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.364613    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.366668    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.368169    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:10.370480    9875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:10.375413 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:10.375441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:12.901196 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:12.911625 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:12.911710 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:12.936713 1685746 cri.go:96] found id: ""
	I1222 01:43:12.936738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.936747 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:12.936753 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:12.936827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:12.961849 1685746 cri.go:96] found id: ""
	I1222 01:43:12.961870 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.961879 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:12.961888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:12.961950 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:12.990893 1685746 cri.go:96] found id: ""
	I1222 01:43:12.990919 1685746 logs.go:282] 0 containers: []
	W1222 01:43:12.990929 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:12.990935 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:12.990996 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:13.033584 1685746 cri.go:96] found id: ""
	I1222 01:43:13.033611 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.033621 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:13.033628 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:13.033691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:13.062192 1685746 cri.go:96] found id: ""
	I1222 01:43:13.062216 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.062225 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:13.062232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:13.062297 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:13.088173 1685746 cri.go:96] found id: ""
	I1222 01:43:13.088213 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.088223 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:13.088230 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:13.088312 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:13.115014 1685746 cri.go:96] found id: ""
	I1222 01:43:13.115051 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.115062 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:13.115069 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:13.115147 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:13.140656 1685746 cri.go:96] found id: ""
	I1222 01:43:13.140691 1685746 logs.go:282] 0 containers: []
	W1222 01:43:13.140700 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:13.140710 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:13.140722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:13.177585 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:13.177660 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:13.233128 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:13.233162 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:13.251827 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:13.251907 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:13.360494 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:13.352114    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.352851    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.354529    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.355189    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:13.356849    9990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:13.360570 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:13.360589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:15.887876 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:15.898631 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:15.898708 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:15.923707 1685746 cri.go:96] found id: ""
	I1222 01:43:15.923732 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.923743 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:15.923750 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:15.923829 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:15.950453 1685746 cri.go:96] found id: ""
	I1222 01:43:15.950478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.950492 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:15.950498 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:15.950612 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:15.975355 1685746 cri.go:96] found id: ""
	I1222 01:43:15.975436 1685746 logs.go:282] 0 containers: []
	W1222 01:43:15.975460 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:15.975475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:15.975549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:16.000992 1685746 cri.go:96] found id: ""
	I1222 01:43:16.001026 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.001036 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:16.001043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:16.001134 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:16.033538 1685746 cri.go:96] found id: ""
	I1222 01:43:16.033563 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.033572 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:16.033578 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:16.033641 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:16.059451 1685746 cri.go:96] found id: ""
	I1222 01:43:16.059476 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.059486 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:16.059492 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:16.059556 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:16.085491 1685746 cri.go:96] found id: ""
	I1222 01:43:16.085515 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.085524 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:16.085530 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:16.085598 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:16.111197 1685746 cri.go:96] found id: ""
	I1222 01:43:16.111220 1685746 logs.go:282] 0 containers: []
	W1222 01:43:16.111228 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:16.111237 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:16.111249 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:16.167058 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:16.167095 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:16.182867 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:16.182947 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:16.303679 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:16.271313   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.272086   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.276862   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.277203   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:16.278713   10085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:16.303753 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:16.303780 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:16.336416 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:16.336497 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:18.869703 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:18.880527 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:18.880602 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:18.906051 1685746 cri.go:96] found id: ""
	I1222 01:43:18.906102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.906112 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:18.906119 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:18.906181 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:18.931999 1685746 cri.go:96] found id: ""
	I1222 01:43:18.932027 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.932036 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:18.932043 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:18.932110 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:18.959202 1685746 cri.go:96] found id: ""
	I1222 01:43:18.959230 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.959239 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:18.959246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:18.959307 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:18.988050 1685746 cri.go:96] found id: ""
	I1222 01:43:18.988075 1685746 logs.go:282] 0 containers: []
	W1222 01:43:18.988084 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:18.988091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:18.988179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:19.014062 1685746 cri.go:96] found id: ""
	I1222 01:43:19.014116 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.014125 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:19.014132 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:19.014197 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:19.041419 1685746 cri.go:96] found id: ""
	I1222 01:43:19.041454 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.041464 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:19.041471 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:19.041548 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:19.067079 1685746 cri.go:96] found id: ""
	I1222 01:43:19.067114 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.067123 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:19.067130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:19.067199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:19.093005 1685746 cri.go:96] found id: ""
	I1222 01:43:19.093041 1685746 logs.go:282] 0 containers: []
	W1222 01:43:19.093050 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:19.093059 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:19.093070 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:19.148083 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:19.148119 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:19.163510 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:19.163547 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:19.228482 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:19.219765   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.220173   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.221836   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.222307   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:19.223917   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:19.228505 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:19.228519 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:19.264345 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:19.264402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:21.823213 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:21.834353 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:21.834427 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:21.860777 1685746 cri.go:96] found id: ""
	I1222 01:43:21.860805 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.860815 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:21.860823 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:21.860889 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:21.889075 1685746 cri.go:96] found id: ""
	I1222 01:43:21.889150 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.889173 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:21.889195 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:21.889284 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:21.915306 1685746 cri.go:96] found id: ""
	I1222 01:43:21.915334 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.915343 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:21.915349 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:21.915413 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:21.940239 1685746 cri.go:96] found id: ""
	I1222 01:43:21.940610 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.940624 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:21.940633 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:21.940694 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:21.966280 1685746 cri.go:96] found id: ""
	I1222 01:43:21.966307 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.966316 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:21.966323 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:21.966392 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:21.991888 1685746 cri.go:96] found id: ""
	I1222 01:43:21.991916 1685746 logs.go:282] 0 containers: []
	W1222 01:43:21.991925 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:21.991934 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:21.991993 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:22.021851 1685746 cri.go:96] found id: ""
	I1222 01:43:22.021878 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.021888 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:22.021895 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:22.021962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:22.052435 1685746 cri.go:96] found id: ""
	I1222 01:43:22.052464 1685746 logs.go:282] 0 containers: []
	W1222 01:43:22.052473 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:22.052483 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:22.052495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:22.128628 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:22.119151   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.120170   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122258   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.122895   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:22.124723   10309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:22.128653 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:22.128668 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:22.154140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:22.154180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:22.190762 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:22.190790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:22.254223 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:22.254264 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:24.790679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:24.801308 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:24.801380 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:24.826466 1685746 cri.go:96] found id: ""
	I1222 01:43:24.826492 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.826501 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:24.826508 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:24.826573 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:24.852169 1685746 cri.go:96] found id: ""
	I1222 01:43:24.852196 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.852206 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:24.852212 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:24.852277 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:24.876880 1685746 cri.go:96] found id: ""
	I1222 01:43:24.876906 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.876915 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:24.876922 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:24.876986 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:24.902741 1685746 cri.go:96] found id: ""
	I1222 01:43:24.902769 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.902778 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:24.902785 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:24.902851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:24.928580 1685746 cri.go:96] found id: ""
	I1222 01:43:24.928603 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.928612 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:24.928618 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:24.928686 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:24.958505 1685746 cri.go:96] found id: ""
	I1222 01:43:24.958533 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.958542 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:24.958548 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:24.958610 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:24.988354 1685746 cri.go:96] found id: ""
	I1222 01:43:24.988394 1685746 logs.go:282] 0 containers: []
	W1222 01:43:24.988403 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:24.988410 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:24.988471 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:25.022402 1685746 cri.go:96] found id: ""
	I1222 01:43:25.022445 1685746 logs.go:282] 0 containers: []
	W1222 01:43:25.022455 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:25.022465 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:25.022477 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:25.090031 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:25.080604   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.081363   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.083352   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.084129   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:25.086042   10424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:25.090122 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:25.090152 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:25.117050 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:25.117090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:25.146413 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:25.146443 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:25.203377 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:25.203415 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.718901 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:27.729888 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:27.729962 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:27.753619 1685746 cri.go:96] found id: ""
	I1222 01:43:27.753643 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.753651 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:27.753657 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:27.753734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:27.778439 1685746 cri.go:96] found id: ""
	I1222 01:43:27.778468 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.778477 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:27.778484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:27.778549 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:27.803747 1685746 cri.go:96] found id: ""
	I1222 01:43:27.803776 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.803786 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:27.803792 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:27.803851 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:27.833272 1685746 cri.go:96] found id: ""
	I1222 01:43:27.833295 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.833303 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:27.833310 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:27.833383 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:27.858574 1685746 cri.go:96] found id: ""
	I1222 01:43:27.858602 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.858613 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:27.858619 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:27.858680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:27.884333 1685746 cri.go:96] found id: ""
	I1222 01:43:27.884361 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.884418 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:27.884434 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:27.884509 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:27.914000 1685746 cri.go:96] found id: ""
	I1222 01:43:27.914111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.914145 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:27.914159 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:27.914221 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:27.939204 1685746 cri.go:96] found id: ""
	I1222 01:43:27.939228 1685746 logs.go:282] 0 containers: []
	W1222 01:43:27.939237 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:27.939246 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:27.939257 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:27.953702 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:27.953728 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:28.021111 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:28.012570   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.013161   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.014852   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.015378   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:28.017163   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:28.021131 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:28.021144 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:28.048052 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:28.048090 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:28.080739 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:28.080776 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.641402 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:30.652837 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:30.652908 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:30.679700 1685746 cri.go:96] found id: ""
	I1222 01:43:30.679727 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.679736 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:30.679743 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:30.679872 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:30.708517 1685746 cri.go:96] found id: ""
	I1222 01:43:30.708545 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.708554 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:30.708561 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:30.708622 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:30.737801 1685746 cri.go:96] found id: ""
	I1222 01:43:30.737829 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.737838 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:30.737845 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:30.737916 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:30.764096 1685746 cri.go:96] found id: ""
	I1222 01:43:30.764124 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.764134 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:30.764141 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:30.764252 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:30.789565 1685746 cri.go:96] found id: ""
	I1222 01:43:30.789591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.789599 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:30.789607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:30.789684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:30.822764 1685746 cri.go:96] found id: ""
	I1222 01:43:30.822833 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.822857 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:30.822871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:30.822957 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:30.848727 1685746 cri.go:96] found id: ""
	I1222 01:43:30.848754 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.848763 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:30.848770 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:30.848830 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:30.876920 1685746 cri.go:96] found id: ""
	I1222 01:43:30.876945 1685746 logs.go:282] 0 containers: []
	W1222 01:43:30.876954 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:30.876963 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:30.876974 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:30.932977 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:30.933015 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:30.950177 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:30.950205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:31.021720 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:31.012613   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.013393   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015106   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.015679   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:31.017436   10658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:31.021745 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:31.021757 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:31.047873 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:31.047908 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.582285 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:33.593589 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:33.593677 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:33.619720 1685746 cri.go:96] found id: ""
	I1222 01:43:33.619746 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.619755 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:33.619762 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:33.619823 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:33.644535 1685746 cri.go:96] found id: ""
	I1222 01:43:33.644558 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.644567 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:33.644573 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:33.644636 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:33.674069 1685746 cri.go:96] found id: ""
	I1222 01:43:33.674133 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.674144 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:33.674151 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:33.674216 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:33.700076 1685746 cri.go:96] found id: ""
	I1222 01:43:33.700102 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.700111 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:33.700118 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:33.700179 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:33.725155 1685746 cri.go:96] found id: ""
	I1222 01:43:33.725182 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.725192 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:33.725199 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:33.725259 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:33.752045 1685746 cri.go:96] found id: ""
	I1222 01:43:33.752120 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.752144 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:33.752166 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:33.752270 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:33.776869 1685746 cri.go:96] found id: ""
	I1222 01:43:33.776897 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.776917 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:33.776925 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:33.776995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:33.804537 1685746 cri.go:96] found id: ""
	I1222 01:43:33.804559 1685746 logs.go:282] 0 containers: []
	W1222 01:43:33.804568 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:33.804577 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:33.804589 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:33.868017 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:33.859678   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.860434   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862123   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.862611   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:33.864178   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:33.868038 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:33.868050 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:33.893225 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:33.893268 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:33.925850 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:33.925880 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:33.984794 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:33.984827 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.500237 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:36.517959 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:36.518035 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:36.566551 1685746 cri.go:96] found id: ""
	I1222 01:43:36.566578 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.566587 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:36.566594 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:36.566675 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:36.601952 1685746 cri.go:96] found id: ""
	I1222 01:43:36.601979 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.601988 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:36.601994 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:36.602069 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:36.628093 1685746 cri.go:96] found id: ""
	I1222 01:43:36.628123 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.628132 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:36.628138 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:36.628199 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:36.653428 1685746 cri.go:96] found id: ""
	I1222 01:43:36.653457 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.653471 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:36.653478 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:36.653536 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:36.680092 1685746 cri.go:96] found id: ""
	I1222 01:43:36.680115 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.680124 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:36.680130 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:36.680189 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:36.706982 1685746 cri.go:96] found id: ""
	I1222 01:43:36.707020 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.707030 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:36.707037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:36.707112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:36.731661 1685746 cri.go:96] found id: ""
	I1222 01:43:36.731738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.731760 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:36.731783 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:36.731878 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:36.759936 1685746 cri.go:96] found id: ""
	I1222 01:43:36.759958 1685746 logs.go:282] 0 containers: []
	W1222 01:43:36.759966 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:36.759975 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:36.759986 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:36.774574 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:36.774601 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:36.840390 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:36.831270   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.832097   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.833923   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.834552   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:36.836202   10885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:36.840453 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:36.840474 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:36.865823 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:36.865861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:36.895884 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:36.895914 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.451426 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:39.462101 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:39.462175 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:39.492238 1685746 cri.go:96] found id: ""
	I1222 01:43:39.492261 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.492270 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:39.492281 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:39.492355 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:39.573214 1685746 cri.go:96] found id: ""
	I1222 01:43:39.573236 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.573244 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:39.573251 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:39.573323 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:39.599147 1685746 cri.go:96] found id: ""
	I1222 01:43:39.599172 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.599181 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:39.599188 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:39.599251 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:39.624765 1685746 cri.go:96] found id: ""
	I1222 01:43:39.624850 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.624874 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:39.624915 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:39.625014 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:39.656217 1685746 cri.go:96] found id: ""
	I1222 01:43:39.656244 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.656253 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:39.656260 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:39.656349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:39.682103 1685746 cri.go:96] found id: ""
	I1222 01:43:39.682127 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.682136 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:39.682143 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:39.682211 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:39.707971 1685746 cri.go:96] found id: ""
	I1222 01:43:39.707999 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.708008 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:39.708015 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:39.708075 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:39.737148 1685746 cri.go:96] found id: ""
	I1222 01:43:39.737175 1685746 logs.go:282] 0 containers: []
	W1222 01:43:39.737184 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:39.737194 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:39.737210 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:39.805404 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:39.797552   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.798323   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.799828   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.800272   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:39.801768   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:39.805427 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:39.805441 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:39.835140 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:39.835180 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:39.864203 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:39.864232 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:39.919399 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:39.919435 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.434907 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:42.447524 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:42.447601 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:42.474430 1685746 cri.go:96] found id: ""
	I1222 01:43:42.474452 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.474468 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:42.474475 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:42.474534 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:42.539132 1685746 cri.go:96] found id: ""
	I1222 01:43:42.539154 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.539178 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:42.539186 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:42.539287 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:42.575001 1685746 cri.go:96] found id: ""
	I1222 01:43:42.575023 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.575031 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:42.575037 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:42.575095 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:42.599923 1685746 cri.go:96] found id: ""
	I1222 01:43:42.599947 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.599956 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:42.599963 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:42.600027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:42.624602 1685746 cri.go:96] found id: ""
	I1222 01:43:42.624630 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.624640 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:42.624646 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:42.624707 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:42.649899 1685746 cri.go:96] found id: ""
	I1222 01:43:42.649925 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.649934 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:42.649941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:42.650001 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:42.675756 1685746 cri.go:96] found id: ""
	I1222 01:43:42.675836 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.675860 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:42.675897 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:42.675973 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:42.702958 1685746 cri.go:96] found id: ""
	I1222 01:43:42.702995 1685746 logs.go:282] 0 containers: []
	W1222 01:43:42.703005 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:42.703014 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:42.703025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:42.759487 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:42.759526 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:42.774803 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:42.774835 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:42.841752 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:42.833129   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.833574   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835289   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.835969   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:42.837683   11114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:42.841776 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:42.841790 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:42.868632 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:42.868666 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:45.400104 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:45.410950 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:45.411071 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:45.436920 1685746 cri.go:96] found id: ""
	I1222 01:43:45.436957 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.436966 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:45.436973 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:45.437044 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:45.464719 1685746 cri.go:96] found id: ""
	I1222 01:43:45.464755 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.464765 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:45.464771 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:45.464841 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:45.501180 1685746 cri.go:96] found id: ""
	I1222 01:43:45.501207 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.501226 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:45.501234 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:45.501305 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:45.547294 1685746 cri.go:96] found id: ""
	I1222 01:43:45.547339 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.547350 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:45.547357 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:45.547435 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:45.581484 1685746 cri.go:96] found id: ""
	I1222 01:43:45.581526 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.581535 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:45.581542 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:45.581613 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:45.610563 1685746 cri.go:96] found id: ""
	I1222 01:43:45.610591 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.610600 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:45.610607 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:45.610679 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:45.637028 1685746 cri.go:96] found id: ""
	I1222 01:43:45.637054 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.637064 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:45.637070 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:45.637141 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:45.662660 1685746 cri.go:96] found id: ""
	I1222 01:43:45.662740 1685746 logs.go:282] 0 containers: []
	W1222 01:43:45.662756 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:45.662767 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:45.662779 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:45.719167 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:45.719208 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:45.734405 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:45.734438 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:45.802645 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:45.794241   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.795082   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.796666   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.797286   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:45.798945   11226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:45.802667 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:45.802680 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:45.829402 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:45.829439 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:48.362229 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:48.372648 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:48.372722 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:48.399816 1685746 cri.go:96] found id: ""
	I1222 01:43:48.399843 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.399852 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:48.399859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:48.399922 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:48.424774 1685746 cri.go:96] found id: ""
	I1222 01:43:48.424800 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.424809 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:48.424816 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:48.424873 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:48.449402 1685746 cri.go:96] found id: ""
	I1222 01:43:48.449429 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.449438 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:48.449444 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:48.449501 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:48.481785 1685746 cri.go:96] found id: ""
	I1222 01:43:48.481811 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.481822 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:48.481828 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:48.481884 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:48.535392 1685746 cri.go:96] found id: ""
	I1222 01:43:48.535421 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.535429 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:48.535435 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:48.535495 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:48.581091 1685746 cri.go:96] found id: ""
	I1222 01:43:48.581119 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.581128 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:48.581135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:48.581195 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:48.608115 1685746 cri.go:96] found id: ""
	I1222 01:43:48.608143 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.608152 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:48.608158 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:48.608222 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:48.634982 1685746 cri.go:96] found id: ""
	I1222 01:43:48.635007 1685746 logs.go:282] 0 containers: []
	W1222 01:43:48.635015 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:48.635024 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:48.635040 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:48.690980 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:48.691017 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:48.706101 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:48.706126 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:48.773880 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:48.764854   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.765671   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.767568   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.768241   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:48.769856   11337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:48.773903 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:48.773915 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:48.798770 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:48.798805 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:51.326747 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:51.337244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:51.337316 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:51.361650 1685746 cri.go:96] found id: ""
	I1222 01:43:51.361674 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.361685 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:51.361691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:51.361752 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:51.387243 1685746 cri.go:96] found id: ""
	I1222 01:43:51.387267 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.387275 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:51.387282 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:51.387339 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:51.412051 1685746 cri.go:96] found id: ""
	I1222 01:43:51.412076 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.412085 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:51.412091 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:51.412152 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:51.442828 1685746 cri.go:96] found id: ""
	I1222 01:43:51.442855 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.442864 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:51.442871 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:51.442931 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:51.469084 1685746 cri.go:96] found id: ""
	I1222 01:43:51.469111 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.469120 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:51.469128 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:51.469196 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:51.505900 1685746 cri.go:96] found id: ""
	I1222 01:43:51.505931 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.505940 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:51.505947 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:51.506015 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:51.544756 1685746 cri.go:96] found id: ""
	I1222 01:43:51.544794 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.544803 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:51.544810 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:51.544881 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:51.595192 1685746 cri.go:96] found id: ""
	I1222 01:43:51.595274 1685746 logs.go:282] 0 containers: []
	W1222 01:43:51.595308 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:51.595330 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:51.595370 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:51.651780 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:51.651815 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:51.666583 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:51.666611 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:51.736962 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:51.728914   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.729300   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.730869   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.731559   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:51.732987   11451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:51.736984 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:51.736997 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:51.763237 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:51.763272 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.292529 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:54.303313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:54.303393 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:54.329228 1685746 cri.go:96] found id: ""
	I1222 01:43:54.329251 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.329260 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:54.329266 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:54.329325 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:54.353443 1685746 cri.go:96] found id: ""
	I1222 01:43:54.353478 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.353488 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:54.353495 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:54.353565 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:54.385463 1685746 cri.go:96] found id: ""
	I1222 01:43:54.385487 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.385496 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:54.385502 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:54.385571 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:54.413065 1685746 cri.go:96] found id: ""
	I1222 01:43:54.413135 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.413160 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:54.413209 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:54.413290 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:54.440350 1685746 cri.go:96] found id: ""
	I1222 01:43:54.440376 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.440385 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:54.440391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:54.440469 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:54.469549 1685746 cri.go:96] found id: ""
	I1222 01:43:54.469583 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.469592 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:54.469599 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:54.469668 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:54.514637 1685746 cri.go:96] found id: ""
	I1222 01:43:54.514714 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.514738 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:54.514761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:54.514876 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:54.546685 1685746 cri.go:96] found id: ""
	I1222 01:43:54.546708 1685746 logs.go:282] 0 containers: []
	W1222 01:43:54.546717 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:54.546726 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:54.546737 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:54.576240 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:54.576324 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:54.618824 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:54.618853 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:54.673867 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:54.673900 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:54.689028 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:54.689057 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:54.755999 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:54.747746   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.748476   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750175   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.750710   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:54.752333   11573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:43:57.257146 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:43:57.268025 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:43:57.268100 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:43:57.292709 1685746 cri.go:96] found id: ""
	I1222 01:43:57.292738 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.292748 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:43:57.292761 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:43:57.292826 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:43:57.321159 1685746 cri.go:96] found id: ""
	I1222 01:43:57.321186 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.321195 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:43:57.321201 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:43:57.321264 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:43:57.350573 1685746 cri.go:96] found id: ""
	I1222 01:43:57.350601 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.350611 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:43:57.350620 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:43:57.350682 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:43:57.380391 1685746 cri.go:96] found id: ""
	I1222 01:43:57.380425 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.380435 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:43:57.380441 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:43:57.380502 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:43:57.404977 1685746 cri.go:96] found id: ""
	I1222 01:43:57.405003 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.405012 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:43:57.405018 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:43:57.405080 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:43:57.431206 1685746 cri.go:96] found id: ""
	I1222 01:43:57.431234 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.431243 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:43:57.431250 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:43:57.431310 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:43:57.458352 1685746 cri.go:96] found id: ""
	I1222 01:43:57.458378 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.458387 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:43:57.458393 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:43:57.458454 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:43:57.487672 1685746 cri.go:96] found id: ""
	I1222 01:43:57.487700 1685746 logs.go:282] 0 containers: []
	W1222 01:43:57.487709 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:43:57.487718 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:43:57.487729 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:43:57.523843 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:43:57.523925 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:43:57.589400 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:43:57.589476 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:43:57.650987 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:43:57.651025 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:43:57.666115 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:43:57.666151 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:43:57.735484 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:43:57.726997   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.727531   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729337   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.729718   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:43:57.731296   11685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.237195 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:00.303116 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:00.303238 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:00.349571 1685746 cri.go:96] found id: ""
	I1222 01:44:00.349604 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.349614 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:00.349623 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:00.349691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:00.397703 1685746 cri.go:96] found id: ""
	I1222 01:44:00.397728 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.397757 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:00.397772 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:00.397869 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:00.445846 1685746 cri.go:96] found id: ""
	I1222 01:44:00.445883 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.445891 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:00.445899 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:00.445975 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:00.481389 1685746 cri.go:96] found id: ""
	I1222 01:44:00.481433 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.481443 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:00.481451 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:00.481545 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:00.555281 1685746 cri.go:96] found id: ""
	I1222 01:44:00.555323 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.555333 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:00.555339 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:00.555417 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:00.610523 1685746 cri.go:96] found id: ""
	I1222 01:44:00.610554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.610565 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:00.610572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:00.610639 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:00.640211 1685746 cri.go:96] found id: ""
	I1222 01:44:00.640242 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.640252 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:00.640261 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:00.640334 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:00.672011 1685746 cri.go:96] found id: ""
	I1222 01:44:00.672037 1685746 logs.go:282] 0 containers: []
	W1222 01:44:00.672046 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:00.672055 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:00.672067 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:00.730908 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:00.730946 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:00.746205 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:00.746280 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:00.814946 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:00.806290   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.807084   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.808801   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.809407   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:00.810977   11785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:00.814969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:00.814982 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:00.841341 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:00.841376 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:03.372817 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:03.383361 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:03.383438 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:03.407536 1685746 cri.go:96] found id: ""
	I1222 01:44:03.407558 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.407566 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:03.407572 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:03.407631 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:03.433092 1685746 cri.go:96] found id: ""
	I1222 01:44:03.433120 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.433129 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:03.433135 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:03.433193 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:03.462721 1685746 cri.go:96] found id: ""
	I1222 01:44:03.462750 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.462759 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:03.462765 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:03.462824 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:03.512849 1685746 cri.go:96] found id: ""
	I1222 01:44:03.512871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.512880 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:03.512887 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:03.512946 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:03.574191 1685746 cri.go:96] found id: ""
	I1222 01:44:03.574217 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.574226 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:03.574232 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:03.574299 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:03.600756 1685746 cri.go:96] found id: ""
	I1222 01:44:03.600785 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.600794 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:03.600801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:03.600865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:03.627524 1685746 cri.go:96] found id: ""
	I1222 01:44:03.627554 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.627564 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:03.627571 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:03.627632 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:03.652207 1685746 cri.go:96] found id: ""
	I1222 01:44:03.652230 1685746 logs.go:282] 0 containers: []
	W1222 01:44:03.652239 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:03.652248 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:03.652258 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:03.710392 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:03.710427 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:03.725850 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:03.725877 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:03.793641 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:03.785409   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.785944   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787476   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.787975   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:03.789432   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:03.793708 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:03.793725 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:03.819086 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:03.819122 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:06.350666 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:06.361704 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:06.361772 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:06.387959 1685746 cri.go:96] found id: ""
	I1222 01:44:06.387985 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.387994 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:06.388001 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:06.388063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:06.420195 1685746 cri.go:96] found id: ""
	I1222 01:44:06.420229 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.420239 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:06.420245 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:06.420318 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:06.444201 1685746 cri.go:96] found id: ""
	I1222 01:44:06.444228 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.444237 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:06.444244 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:06.444326 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:06.469606 1685746 cri.go:96] found id: ""
	I1222 01:44:06.469635 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.469644 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:06.469650 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:06.469714 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:06.516673 1685746 cri.go:96] found id: ""
	I1222 01:44:06.516703 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.516712 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:06.516719 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:06.516783 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:06.554976 1685746 cri.go:96] found id: ""
	I1222 01:44:06.555004 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.555014 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:06.555020 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:06.555079 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:06.587406 1685746 cri.go:96] found id: ""
	I1222 01:44:06.587434 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.587443 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:06.587449 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:06.587511 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:06.620595 1685746 cri.go:96] found id: ""
	I1222 01:44:06.620623 1685746 logs.go:282] 0 containers: []
	W1222 01:44:06.620633 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:06.620642 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:06.620655 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:06.677532 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:06.677567 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:06.692910 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:06.692987 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:06.760398 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:06.750827   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.751480   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.753963   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.754997   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.755795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:06.750827   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.751480   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.753963   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.754997   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:06.755795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:06.760423 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:06.760436 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:06.785709 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:06.785743 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.314372 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:09.325259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:09.325349 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:09.350687 1685746 cri.go:96] found id: ""
	I1222 01:44:09.350712 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.350726 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:09.350733 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:09.350794 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:09.376225 1685746 cri.go:96] found id: ""
	I1222 01:44:09.376252 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.376260 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:09.376267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:09.376332 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:09.402898 1685746 cri.go:96] found id: ""
	I1222 01:44:09.402922 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.402931 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:09.402937 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:09.403008 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:09.428038 1685746 cri.go:96] found id: ""
	I1222 01:44:09.428066 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.428075 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:09.428082 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:09.428150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:09.456772 1685746 cri.go:96] found id: ""
	I1222 01:44:09.456798 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.456806 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:09.456813 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:09.456871 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:09.484926 1685746 cri.go:96] found id: ""
	I1222 01:44:09.484953 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.484962 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:09.484968 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:09.485029 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:09.521247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.521276 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.521285 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:09.521292 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:09.521361 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:09.559247 1685746 cri.go:96] found id: ""
	I1222 01:44:09.559283 1685746 logs.go:282] 0 containers: []
	W1222 01:44:09.559292 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:09.559301 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:09.559313 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:09.576452 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:09.576488 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:09.647498 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:09.639570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.640313   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.641853   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.642396   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.643920   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:09.639570   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.640313   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.641853   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.642396   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:09.643920   12128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:09.647522 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:09.647535 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:09.672763 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:09.672799 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:09.703339 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:09.703367 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.258428 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:12.269740 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:12.269827 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:12.295142 1685746 cri.go:96] found id: ""
	I1222 01:44:12.295166 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.295174 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:12.295181 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:12.295239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:12.324426 1685746 cri.go:96] found id: ""
	I1222 01:44:12.324453 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.324462 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:12.324468 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:12.324528 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:12.352908 1685746 cri.go:96] found id: ""
	I1222 01:44:12.352936 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.352945 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:12.352952 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:12.353016 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:12.382056 1685746 cri.go:96] found id: ""
	I1222 01:44:12.382106 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.382115 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:12.382122 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:12.382184 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:12.405895 1685746 cri.go:96] found id: ""
	I1222 01:44:12.405926 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.405935 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:12.405941 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:12.406063 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:12.432020 1685746 cri.go:96] found id: ""
	I1222 01:44:12.432046 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.432055 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:12.432062 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:12.432167 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:12.460268 1685746 cri.go:96] found id: ""
	I1222 01:44:12.460316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.460325 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:12.460332 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:12.460391 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:12.510214 1685746 cri.go:96] found id: ""
	I1222 01:44:12.510243 1685746 logs.go:282] 0 containers: []
	W1222 01:44:12.510252 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:12.510261 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:12.510281 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:12.574866 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:12.574895 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:12.630459 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:12.630495 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:12.645639 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:12.645667 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:12.715658 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:12.707654   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.708218   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.709705   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.710254   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.711749   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:12.707654   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.708218   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.709705   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.710254   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:12.711749   12252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:12.715678 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:12.715691 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.242028 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:15.253031 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:15.253105 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:15.283751 1685746 cri.go:96] found id: ""
	I1222 01:44:15.283784 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.283794 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:15.283800 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:15.283865 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:15.308803 1685746 cri.go:96] found id: ""
	I1222 01:44:15.308830 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.308840 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:15.308846 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:15.308911 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:15.334334 1685746 cri.go:96] found id: ""
	I1222 01:44:15.334362 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.334371 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:15.334378 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:15.334437 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:15.363819 1685746 cri.go:96] found id: ""
	I1222 01:44:15.363843 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.363852 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:15.363859 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:15.363920 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:15.389166 1685746 cri.go:96] found id: ""
	I1222 01:44:15.389194 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.389203 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:15.389211 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:15.389275 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:15.418948 1685746 cri.go:96] found id: ""
	I1222 01:44:15.419022 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.419035 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:15.419042 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:15.419135 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:15.446013 1685746 cri.go:96] found id: ""
	I1222 01:44:15.446105 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.446130 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:15.446162 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:15.446236 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:15.470779 1685746 cri.go:96] found id: ""
	I1222 01:44:15.470806 1685746 logs.go:282] 0 containers: []
	W1222 01:44:15.470815 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:15.470825 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:15.470857 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:15.551154 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:15.551246 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:15.578834 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:15.578861 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:15.644949 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:15.637144   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.637770   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639327   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639800   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.641301   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:15.637144   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.637770   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639327   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.639800   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:15.641301   12353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:15.644969 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:15.644981 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:15.670551 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:15.670585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:18.202679 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:18.213735 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:18.213812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:18.239304 1685746 cri.go:96] found id: ""
	I1222 01:44:18.239327 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.239336 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:18.239342 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:18.239401 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:18.265064 1685746 cri.go:96] found id: ""
	I1222 01:44:18.265089 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.265098 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:18.265104 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:18.265165 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:18.290606 1685746 cri.go:96] found id: ""
	I1222 01:44:18.290642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.290652 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:18.290659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:18.290734 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:18.317208 1685746 cri.go:96] found id: ""
	I1222 01:44:18.317231 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.317240 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:18.317246 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:18.317306 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:18.342186 1685746 cri.go:96] found id: ""
	I1222 01:44:18.342207 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.342216 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:18.342222 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:18.342280 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:18.367436 1685746 cri.go:96] found id: ""
	I1222 01:44:18.367468 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.367477 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:18.367484 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:18.367572 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:18.392591 1685746 cri.go:96] found id: ""
	I1222 01:44:18.392616 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.392625 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:18.392632 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:18.392691 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:18.417782 1685746 cri.go:96] found id: ""
	I1222 01:44:18.417820 1685746 logs.go:282] 0 containers: []
	W1222 01:44:18.417829 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:18.417838 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:18.417850 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:18.475370 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:18.475402 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:18.496693 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:18.496722 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:18.602667 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:18.591806   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.592517   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.594327   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595360   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:18.595864   12468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:18.602690 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:18.602704 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:18.628074 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:18.628158 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:21.160991 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:21.171843 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:21.171925 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:21.197008 1685746 cri.go:96] found id: ""
	I1222 01:44:21.197035 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.197045 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:21.197051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:21.197111 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:21.222701 1685746 cri.go:96] found id: ""
	I1222 01:44:21.222731 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.222740 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:21.222747 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:21.222812 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:21.247835 1685746 cri.go:96] found id: ""
	I1222 01:44:21.247858 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.247867 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:21.247874 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:21.247932 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:21.272366 1685746 cri.go:96] found id: ""
	I1222 01:44:21.272400 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.272411 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:21.272418 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:21.272483 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:21.297348 1685746 cri.go:96] found id: ""
	I1222 01:44:21.297375 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.297384 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:21.297391 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:21.297449 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:21.321989 1685746 cri.go:96] found id: ""
	I1222 01:44:21.322013 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.322022 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:21.322029 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:21.322112 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:21.350652 1685746 cri.go:96] found id: ""
	I1222 01:44:21.350677 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.350685 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:21.350691 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:21.350754 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:21.382678 1685746 cri.go:96] found id: ""
	I1222 01:44:21.382748 1685746 logs.go:282] 0 containers: []
	W1222 01:44:21.382773 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:21.382791 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:21.382804 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:21.438683 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:21.438718 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:21.453712 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:21.453745 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:21.571593 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:21.558586   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.559284   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.561942   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.562734   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:21.565012   12575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:21.571621 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:21.571635 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:21.598254 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:21.598290 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:24.133046 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:24.144639 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:24.144716 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:24.170797 1685746 cri.go:96] found id: ""
	I1222 01:44:24.170821 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.170830 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:24.170838 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:24.170901 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:24.198790 1685746 cri.go:96] found id: ""
	I1222 01:44:24.198813 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.198822 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:24.198830 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:24.198892 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:24.223222 1685746 cri.go:96] found id: ""
	I1222 01:44:24.223245 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.223253 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:24.223259 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:24.223317 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:24.248490 1685746 cri.go:96] found id: ""
	I1222 01:44:24.248573 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.248590 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:24.248598 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:24.248678 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:24.273541 1685746 cri.go:96] found id: ""
	I1222 01:44:24.273570 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.273578 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:24.273585 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:24.273647 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:24.298819 1685746 cri.go:96] found id: ""
	I1222 01:44:24.298847 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.298856 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:24.298863 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:24.298921 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:24.324215 1685746 cri.go:96] found id: ""
	I1222 01:44:24.324316 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.324334 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:24.324341 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:24.324420 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:24.349700 1685746 cri.go:96] found id: ""
	I1222 01:44:24.349727 1685746 logs.go:282] 0 containers: []
	W1222 01:44:24.349736 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:24.349745 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:24.349756 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:24.405384 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:24.405419 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:24.420496 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:24.420524 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:24.481353 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:24.473051   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.473892   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475622   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.475956   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:24.477524   12686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:24.481378 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:24.481392 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:24.507731 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:24.508076 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.051455 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:27.062328 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:27.062402 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:27.088764 1685746 cri.go:96] found id: ""
	I1222 01:44:27.088786 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.088795 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:27.088801 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:27.088859 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:27.113929 1685746 cri.go:96] found id: ""
	I1222 01:44:27.113951 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.113959 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:27.113966 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:27.114027 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:27.139537 1685746 cri.go:96] found id: ""
	I1222 01:44:27.139562 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.139577 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:27.139584 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:27.139645 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:27.164769 1685746 cri.go:96] found id: ""
	I1222 01:44:27.164792 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.164800 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:27.164807 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:27.164867 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:27.190396 1685746 cri.go:96] found id: ""
	I1222 01:44:27.190424 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.190433 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:27.190440 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:27.190503 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:27.215574 1685746 cri.go:96] found id: ""
	I1222 01:44:27.215599 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.215608 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:27.215616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:27.215684 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:27.246139 1685746 cri.go:96] found id: ""
	I1222 01:44:27.246162 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.246172 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:27.246178 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:27.246239 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:27.272153 1685746 cri.go:96] found id: ""
	I1222 01:44:27.272177 1685746 logs.go:282] 0 containers: []
	W1222 01:44:27.272185 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:27.272193 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:27.272205 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:27.303523 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:27.303552 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:27.363938 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:27.363985 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:27.380130 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:27.380163 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:27.443113 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:27.434806   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.435316   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437169   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.437629   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:27.439083   12809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:27.443137 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:27.443149 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:29.969751 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:29.980564 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:29.980638 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:30.027489 1685746 cri.go:96] found id: ""
	I1222 01:44:30.027515 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.027524 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:30.027532 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:30.027604 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:30.063116 1685746 cri.go:96] found id: ""
	I1222 01:44:30.063142 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.063152 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:30.063160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:30.063229 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:30.111428 1685746 cri.go:96] found id: ""
	I1222 01:44:30.111455 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.111466 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:30.111473 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:30.111543 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:30.142346 1685746 cri.go:96] found id: ""
	I1222 01:44:30.142381 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.142391 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:30.142406 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:30.142499 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:30.171044 1685746 cri.go:96] found id: ""
	I1222 01:44:30.171068 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.171078 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:30.171084 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:30.171150 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:30.206010 1685746 cri.go:96] found id: ""
	I1222 01:44:30.206034 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.206044 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:30.206051 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:30.206225 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:30.235230 1685746 cri.go:96] found id: ""
	I1222 01:44:30.235255 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.235264 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:30.235272 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:30.235404 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:30.262624 1685746 cri.go:96] found id: ""
	I1222 01:44:30.262651 1685746 logs.go:282] 0 containers: []
	W1222 01:44:30.262661 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:30.262671 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:30.262689 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:30.320010 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:30.320048 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:30.336273 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:30.336303 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:30.407334 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:30.398947   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.399729   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.401509   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.402016   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:30.403552   12913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:30.407358 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:30.407373 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:30.432976 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:30.433010 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:32.965996 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:32.976893 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:32.976972 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:33.004108 1685746 cri.go:96] found id: ""
	I1222 01:44:33.004138 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.004149 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:33.004157 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:33.004293 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:33.032305 1685746 cri.go:96] found id: ""
	I1222 01:44:33.032333 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.032343 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:33.032350 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:33.032410 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:33.060572 1685746 cri.go:96] found id: ""
	I1222 01:44:33.060600 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.060610 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:33.060616 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:33.060680 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:33.086067 1685746 cri.go:96] found id: ""
	I1222 01:44:33.086112 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.086122 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:33.086129 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:33.086188 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:33.112283 1685746 cri.go:96] found id: ""
	I1222 01:44:33.112310 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.112320 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:33.112326 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:33.112390 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:33.143337 1685746 cri.go:96] found id: ""
	I1222 01:44:33.143363 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.143372 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:33.143379 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:33.143441 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:33.169224 1685746 cri.go:96] found id: ""
	I1222 01:44:33.169250 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.169259 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:33.169267 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:33.169327 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:33.198401 1685746 cri.go:96] found id: ""
	I1222 01:44:33.198422 1685746 logs.go:282] 0 containers: []
	W1222 01:44:33.198431 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:33.198440 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:33.198451 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:33.256328 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:33.256364 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:33.271899 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:33.271930 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:33.338753 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:33.330214   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.330943   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.332678   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.333280   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:33.335093   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:33.338786 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:33.338800 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:33.364007 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:33.364042 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:35.895269 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:35.906191 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1222 01:44:35.906266 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 01:44:35.931271 1685746 cri.go:96] found id: ""
	I1222 01:44:35.931297 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.931306 1685746 logs.go:284] No container was found matching "kube-apiserver"
	I1222 01:44:35.931313 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1222 01:44:35.931372 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 01:44:35.958259 1685746 cri.go:96] found id: ""
	I1222 01:44:35.958289 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.958298 1685746 logs.go:284] No container was found matching "etcd"
	I1222 01:44:35.958312 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1222 01:44:35.958414 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 01:44:35.982836 1685746 cri.go:96] found id: ""
	I1222 01:44:35.982861 1685746 logs.go:282] 0 containers: []
	W1222 01:44:35.982871 1685746 logs.go:284] No container was found matching "coredns"
	I1222 01:44:35.982877 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1222 01:44:35.982937 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 01:44:36.012610 1685746 cri.go:96] found id: ""
	I1222 01:44:36.012642 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.012652 1685746 logs.go:284] No container was found matching "kube-scheduler"
	I1222 01:44:36.012659 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1222 01:44:36.012739 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 01:44:36.039888 1685746 cri.go:96] found id: ""
	I1222 01:44:36.039914 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.039924 1685746 logs.go:284] No container was found matching "kube-proxy"
	I1222 01:44:36.039933 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 01:44:36.039995 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 01:44:36.070115 1685746 cri.go:96] found id: ""
	I1222 01:44:36.070144 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.070153 1685746 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 01:44:36.070160 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1222 01:44:36.070220 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 01:44:36.095790 1685746 cri.go:96] found id: ""
	I1222 01:44:36.095871 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.095887 1685746 logs.go:284] No container was found matching "kindnet"
	I1222 01:44:36.095896 1685746 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1222 01:44:36.095967 1685746 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kubernetes-dashboard
	I1222 01:44:36.122442 1685746 cri.go:96] found id: ""
	I1222 01:44:36.122519 1685746 logs.go:282] 0 containers: []
	W1222 01:44:36.122531 1685746 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1222 01:44:36.122570 1685746 logs.go:123] Gathering logs for container status ...
	I1222 01:44:36.122585 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 01:44:36.151370 1685746 logs.go:123] Gathering logs for kubelet ...
	I1222 01:44:36.151396 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 01:44:36.206896 1685746 logs.go:123] Gathering logs for dmesg ...
	I1222 01:44:36.206937 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 01:44:36.222382 1685746 logs.go:123] Gathering logs for describe nodes ...
	I1222 01:44:36.222413 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 01:44:36.290888 1685746 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 01:44:36.282237   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.282881   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284438   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.284963   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:36.286512   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 01:44:36.290912 1685746 logs.go:123] Gathering logs for containerd ...
	I1222 01:44:36.290927 1685746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1222 01:44:38.822770 1685746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:44:38.837195 1685746 out.go:203] 
	W1222 01:44:38.840003 1685746 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1222 01:44:38.840044 1685746 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1222 01:44:38.840057 1685746 out.go:285] * Related issues:
	W1222 01:44:38.840077 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1222 01:44:38.840096 1685746 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1222 01:44:38.842944 1685746 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.471912430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.471984422Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472090343Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472165782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472253611Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472320967Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472396340Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472469810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472535599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472630435Z" level=info msg="Connect containerd service"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.472973486Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.473627974Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488429715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488493453Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488524985Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.488575940Z" level=info msg="Start recovering state"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527785198Z" level=info msg="Start event monitor"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527839713Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527850864Z" level=info msg="Start streaming server"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527863213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527915176Z" level=info msg="runtime interface starting up..."
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527922561Z" level=info msg="starting plugins..."
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.527953700Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:38:37 newest-cni-869293 containerd[555]: time="2025-12-22T01:38:37.528090677Z" level=info msg="containerd successfully booted in 0.081452s"
	Dec 22 01:38:37 newest-cni-869293 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:44:51.718160   13806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:51.718755   13806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:51.720300   13806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:51.720857   13806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:44:51.722512   13806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:44:51 up 1 day,  8:27,  0 user,  load average: 1.74, 1.04, 1.42
	Linux newest-cni-869293 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:44:48 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:49 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 22 01:44:49 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:49 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:49 newest-cni-869293 kubelet[13673]: E1222 01:44:49.557858   13673 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:49 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:49 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:50 newest-cni-869293 kubelet[13709]: E1222 01:44:50.337430   13709 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:50 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:51 newest-cni-869293 kubelet[13715]: E1222 01:44:51.107099   13715 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:51 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:51 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:44:51 newest-cni-869293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 22 01:44:51 newest-cni-869293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:51 newest-cni-869293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:44:51 newest-cni-869293 kubelet[13810]: E1222 01:44:51.803687   13810 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:44:51 newest-cni-869293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:44:51 newest-cni-869293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-869293 -n newest-cni-869293: exit status 2 (381.831497ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-869293" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.55s)
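The kubelet crash loop in the log above ("kubelet is configured to not run on a host using cgroup v1") is why the apiserver process never appears: the v1.35.0-rc.1 kubelet refuses to start on a cgroup v1 host. As a diagnostic sketch (not part of the test suite), the filesystem type mounted at /sys/fs/cgroup tells you which hierarchy a Linux host exposes:

```shell
# Filesystem type at /sys/fs/cgroup identifies the cgroup hierarchy:
#   cgroup2fs -> unified cgroup v2 (accepted by newer kubelets)
#   tmpfs     -> legacy cgroup v1 (rejected by the kubelet above)
stat -fc %T /sys/fs/cgroup
```

On a cgroup v1 host, booting with the kernel parameter `systemd.unified_cgroup_hierarchy=1` switches systemd to the v2 hierarchy; whether the CI worker image can be rebooted that way is outside the scope of this report.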

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.57s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:51:39.269299 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1222 01:52:17.456151 1396864 config.go:182] Loaded profile config "kindnet-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
E1222 01:52:19.808323 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:20.229798 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:22.714133 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:22.719448 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:22.729790 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:22.750203 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:22.790554 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:22.871076 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:23.031491 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:23.352245 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:23.992711 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:25.273567 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:27.833741 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:32.954867 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:52:43.195863 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:53:03.676228 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:53:07.826383 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:53:42.155346 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:53:44.637452 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:05.524366 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:54:05.529720 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:54:05.540106 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:54:05.560387 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:54:05.600754 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:54:05.681117 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:54:05.841604 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:54:06.162178 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:06.802566 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:08.083388 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:10.643780 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:13.742707 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/old-k8s-version-433815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:15.764601 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:26.004796 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:26.767193 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/default-k8s-diff-port-778490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:54:46.485218 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:06.558585 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:27.446259 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/calico-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:44.337303 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:44.343236 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:55:44.355208 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:55:44.375783 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:55:44.416068 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:55:44.496394 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:55:44.656858 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:55:44.977053 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:45.617882 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:46.898978 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:49.460141 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:54.580394 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/custom-flannel-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:56.756586 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1222 01:55:58.307881 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/auto-892179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 2 (299.341642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-154186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-154186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.453µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-154186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-154186
helpers_test.go:244: (dbg) docker inspect no-preload-154186:

-- stdout --
	[
	    {
	        "Id": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	        "Created": "2025-12-22T01:26:03.981102368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1681449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T01:36:23.503691575Z",
	            "FinishedAt": "2025-12-22T01:36:22.112804811Z"
	        },
	        "Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
	        "ResolvConfPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/hosts",
	        "LogPath": "/var/lib/docker/containers/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7/66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7-json.log",
	        "Name": "/no-preload-154186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-154186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-154186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66e4ffc32ac4d9a593d1b758510a0770f54cbfe4fc8bd1c7d5d5625196ae8de7",
	                "LowerDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8373d16a8e2d6ece6c54ed2fe561f951d426d306116bd1c2373e51241872490/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-154186",
	                "Source": "/var/lib/docker/volumes/no-preload-154186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-154186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-154186",
	                "name.minikube.sigs.k8s.io": "no-preload-154186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79de2efae0fd51e067446e17772315f189c10d5767e33af4ebd104752f65737c",
	            "SandboxKey": "/var/run/docker/netns/79de2efae0fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38702"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38703"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38706"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38704"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38705"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-154186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:c4:4d:4a:c4:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16496b4bf71c897a797bd98394f7b6821dd2bd1d65c81e9f1efd0fcd1f14d1a5",
	                    "EndpointID": "47af66f9da650982ed99a47d4f083adda357be5441350f59f6280b70b837f98e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-154186",
	                        "66e4ffc32ac4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
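The `NetworkSettings.Ports` map in the inspect output above is easier to query programmatically than to eyeball; a minimal sketch (the JSON literal below is a trimmed stand-in for the real `docker inspect no-preload-154186` output, using the host ports recorded above):

```python
import json

# Trimmed stand-in for the `docker inspect` output shown above.
inspect_output = json.loads("""
[
  {
    "Name": "/no-preload-154186",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "38702"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38705"}]
      }
    }
  }
]
""")

def host_port(inspect, container_port):
    """Return the first published host port for a container port, or None."""
    bindings = inspect[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return bindings[0]["HostPort"] if bindings else None

print(host_port(inspect_output, "8443/tcp"))  # apiserver port -> 38705
```

The same lookup can be done directly with `docker inspect --format` and a Go template, but parsing the JSON keeps the whole record available for the rest of a post-mortem script.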
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 2 (333.412662ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-154186 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                            ARGS                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-892179 sudo iptables -t nat -L -n -v                                 │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl status kubelet --all --full --no-pager         │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl cat kubelet --no-pager                         │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo journalctl -xeu kubelet --all --full --no-pager          │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo cat /etc/kubernetes/kubelet.conf                         │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo cat /var/lib/kubelet/config.yaml                         │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl status docker --all --full --no-pager          │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl cat docker --no-pager                          │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo cat /etc/docker/daemon.json                              │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p enable-default-cni-892179 sudo docker system info                                       │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl status cri-docker --all --full --no-pager      │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl cat cri-docker --no-pager                      │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p enable-default-cni-892179 sudo cat /usr/lib/systemd/system/cri-docker.service           │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo cri-dockerd --version                                    │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl status containerd --all --full --no-pager      │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl cat containerd --no-pager                      │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo cat /lib/systemd/system/containerd.service               │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo cat /etc/containerd/config.toml                          │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo containerd config dump                                   │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl status crio --all --full --no-pager            │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p enable-default-cni-892179 sudo systemctl cat crio --no-pager                            │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ ssh     │ -p enable-default-cni-892179 sudo crio config                                              │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	│ delete  │ -p enable-default-cni-892179                                                               │ enable-default-cni-892179 │ jenkins │ v1.37.0 │ 22 Dec 25 01:55 UTC │ 22 Dec 25 01:55 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 01:54:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 01:54:01.393729 1747103 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:54:01.393923 1747103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:54:01.393957 1747103 out.go:374] Setting ErrFile to fd 2...
	I1222 01:54:01.393976 1747103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:54:01.394296 1747103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:54:01.394792 1747103 out.go:368] Setting JSON to false
	I1222 01:54:01.395741 1747103 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":117394,"bootTime":1766251047,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:54:01.395840 1747103 start.go:143] virtualization:  
	I1222 01:54:01.400332 1747103 out.go:179] * [enable-default-cni-892179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:54:01.405107 1747103 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:54:01.405191 1747103 notify.go:221] Checking for updates...
	I1222 01:54:01.409488 1747103 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:54:01.412689 1747103 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:54:01.415773 1747103 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:54:01.418923 1747103 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:54:01.422037 1747103 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:54:01.425872 1747103 config.go:182] Loaded profile config "no-preload-154186": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:54:01.426004 1747103 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:54:01.461588 1747103 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:54:01.461732 1747103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:54:01.518707 1747103 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:54:01.509034948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:54:01.518818 1747103 docker.go:319] overlay module found
	I1222 01:54:01.522135 1747103 out.go:179] * Using the docker driver based on user configuration
	I1222 01:54:01.525115 1747103 start.go:309] selected driver: docker
	I1222 01:54:01.525140 1747103 start.go:928] validating driver "docker" against <nil>
	I1222 01:54:01.525169 1747103 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:54:01.525894 1747103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:54:01.580406 1747103 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:54:01.571101717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:54:01.580573 1747103 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	E1222 01:54:01.580795 1747103 start_flags.go:484] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1222 01:54:01.580827 1747103 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:54:01.583788 1747103 out.go:179] * Using Docker driver with root privileges
	I1222 01:54:01.586736 1747103 cni.go:84] Creating CNI manager for "bridge"
	I1222 01:54:01.586759 1747103 start_flags.go:338] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 01:54:01.586850 1747103 start.go:353] cluster config:
	{Name:enable-default-cni-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:54:01.590218 1747103 out.go:179] * Starting "enable-default-cni-892179" primary control-plane node in "enable-default-cni-892179" cluster
	I1222 01:54:01.593107 1747103 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 01:54:01.596071 1747103 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1222 01:54:01.598864 1747103 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:54:01.598915 1747103 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1222 01:54:01.598931 1747103 cache.go:65] Caching tarball of preloaded images
	I1222 01:54:01.598950 1747103 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 01:54:01.599011 1747103 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1222 01:54:01.599022 1747103 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1222 01:54:01.599136 1747103 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/config.json ...
	I1222 01:54:01.599159 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/config.json: {Name:mk365a44d14d663f06e8f50b91470ad9deda37ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:01.619211 1747103 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1222 01:54:01.619238 1747103 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1222 01:54:01.619259 1747103 cache.go:243] Successfully downloaded all kic artifacts
	I1222 01:54:01.619296 1747103 start.go:360] acquireMachinesLock for enable-default-cni-892179: {Name:mk0244a983caee1b6b93ecc358be89d83f03084c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 01:54:01.619427 1747103 start.go:364] duration metric: took 109.088µs to acquireMachinesLock for "enable-default-cni-892179"
	I1222 01:54:01.619458 1747103 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:54:01.619537 1747103 start.go:125] createHost starting for "" (driver="docker")
	I1222 01:54:01.622928 1747103 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 01:54:01.623188 1747103 start.go:159] libmachine.API.Create for "enable-default-cni-892179" (driver="docker")
	I1222 01:54:01.623234 1747103 client.go:173] LocalClient.Create starting
	I1222 01:54:01.623318 1747103 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
	I1222 01:54:01.623354 1747103 main.go:144] libmachine: Decoding PEM data...
	I1222 01:54:01.623373 1747103 main.go:144] libmachine: Parsing certificate...
	I1222 01:54:01.623434 1747103 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
	I1222 01:54:01.623455 1747103 main.go:144] libmachine: Decoding PEM data...
	I1222 01:54:01.623467 1747103 main.go:144] libmachine: Parsing certificate...
	I1222 01:54:01.623855 1747103 cli_runner.go:164] Run: docker network inspect enable-default-cni-892179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 01:54:01.640601 1747103 cli_runner.go:211] docker network inspect enable-default-cni-892179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 01:54:01.640685 1747103 network_create.go:284] running [docker network inspect enable-default-cni-892179] to gather additional debugging logs...
	I1222 01:54:01.640715 1747103 cli_runner.go:164] Run: docker network inspect enable-default-cni-892179
	W1222 01:54:01.656146 1747103 cli_runner.go:211] docker network inspect enable-default-cni-892179 returned with exit code 1
	I1222 01:54:01.656178 1747103 network_create.go:287] error running [docker network inspect enable-default-cni-892179]: docker network inspect enable-default-cni-892179: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-892179 not found
	I1222 01:54:01.656192 1747103 network_create.go:289] output of [docker network inspect enable-default-cni-892179]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-892179 not found
	
	** /stderr **
	I1222 01:54:01.656307 1747103 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:54:01.673388 1747103 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
	I1222 01:54:01.673753 1747103 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f8ef64c3e2a9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:c4:2a:ab:ba:65} reservation:<nil>}
	I1222 01:54:01.674160 1747103 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abf79bf53636 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:97:f6:b2:21:18} reservation:<nil>}
	I1222 01:54:01.674681 1747103 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a64eb0}
	I1222 01:54:01.674703 1747103 network_create.go:124] attempt to create docker network enable-default-cni-892179 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 01:54:01.674759 1747103 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-892179 enable-default-cni-892179
	I1222 01:54:01.741908 1747103 network_create.go:108] docker network enable-default-cni-892179 192.168.76.0/24 created
	I1222 01:54:01.741942 1747103 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-892179" container
	I1222 01:54:01.742027 1747103 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 01:54:01.760303 1747103 cli_runner.go:164] Run: docker volume create enable-default-cni-892179 --label name.minikube.sigs.k8s.io=enable-default-cni-892179 --label created_by.minikube.sigs.k8s.io=true
	I1222 01:54:01.778221 1747103 oci.go:103] Successfully created a docker volume enable-default-cni-892179
	I1222 01:54:01.778321 1747103 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-892179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-892179 --entrypoint /usr/bin/test -v enable-default-cni-892179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1222 01:54:02.352558 1747103 oci.go:107] Successfully prepared a docker volume enable-default-cni-892179
	I1222 01:54:02.352632 1747103 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:54:02.352649 1747103 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 01:54:02.352728 1747103 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-892179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 01:54:06.374035 1747103 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-892179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.021251347s)
	I1222 01:54:06.374069 1747103 kic.go:203] duration metric: took 4.021417125s to extract preloaded images to volume ...
	W1222 01:54:06.374230 1747103 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 01:54:06.374362 1747103 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 01:54:06.431438 1747103 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-892179 --name enable-default-cni-892179 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-892179 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-892179 --network enable-default-cni-892179 --ip 192.168.76.2 --volume enable-default-cni-892179:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1222 01:54:06.772380 1747103 cli_runner.go:164] Run: docker container inspect enable-default-cni-892179 --format={{.State.Running}}
	I1222 01:54:06.798255 1747103 cli_runner.go:164] Run: docker container inspect enable-default-cni-892179 --format={{.State.Status}}
	I1222 01:54:06.821921 1747103 cli_runner.go:164] Run: docker exec enable-default-cni-892179 stat /var/lib/dpkg/alternatives/iptables
	I1222 01:54:06.873204 1747103 oci.go:144] the created container "enable-default-cni-892179" has a running status.
	I1222 01:54:06.873231 1747103 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa...
	I1222 01:54:07.335280 1747103 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 01:54:07.356309 1747103 cli_runner.go:164] Run: docker container inspect enable-default-cni-892179 --format={{.State.Status}}
	I1222 01:54:07.373871 1747103 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 01:54:07.373897 1747103 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-892179 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 01:54:07.414541 1747103 cli_runner.go:164] Run: docker container inspect enable-default-cni-892179 --format={{.State.Status}}
	I1222 01:54:07.431899 1747103 machine.go:94] provisionDockerMachine start ...
	I1222 01:54:07.432013 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:07.449206 1747103 main.go:144] libmachine: Using SSH client type: native
	I1222 01:54:07.449555 1747103 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38742 <nil> <nil>}
	I1222 01:54:07.449571 1747103 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 01:54:07.450274 1747103 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1222 01:54:10.581709 1747103 main.go:144] libmachine: SSH cmd err, output: <nil>: enable-default-cni-892179
	
	I1222 01:54:10.581736 1747103 ubuntu.go:182] provisioning hostname "enable-default-cni-892179"
	I1222 01:54:10.581810 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:10.600187 1747103 main.go:144] libmachine: Using SSH client type: native
	I1222 01:54:10.600508 1747103 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38742 <nil> <nil>}
	I1222 01:54:10.600526 1747103 main.go:144] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-892179 && echo "enable-default-cni-892179" | sudo tee /etc/hostname
	I1222 01:54:10.748481 1747103 main.go:144] libmachine: SSH cmd err, output: <nil>: enable-default-cni-892179
	
	I1222 01:54:10.748575 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:10.768291 1747103 main.go:144] libmachine: Using SSH client type: native
	I1222 01:54:10.768621 1747103 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil>  [] 0s} 127.0.0.1 38742 <nil> <nil>}
	I1222 01:54:10.768643 1747103 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-892179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-892179/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-892179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 01:54:10.906763 1747103 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 01:54:10.906868 1747103 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
	I1222 01:54:10.906929 1747103 ubuntu.go:190] setting up certificates
	I1222 01:54:10.906968 1747103 provision.go:84] configureAuth start
	I1222 01:54:10.907083 1747103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-892179
	I1222 01:54:10.926353 1747103 provision.go:143] copyHostCerts
	I1222 01:54:10.926439 1747103 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
	I1222 01:54:10.926454 1747103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
	I1222 01:54:10.926535 1747103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
	I1222 01:54:10.926631 1747103 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
	I1222 01:54:10.926640 1747103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
	I1222 01:54:10.926665 1747103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
	I1222 01:54:10.926726 1747103 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
	I1222 01:54:10.926734 1747103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
	I1222 01:54:10.926758 1747103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
	I1222 01:54:10.926809 1747103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-892179 san=[127.0.0.1 192.168.76.2 enable-default-cni-892179 localhost minikube]
	I1222 01:54:11.042779 1747103 provision.go:177] copyRemoteCerts
	I1222 01:54:11.042909 1747103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 01:54:11.042987 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:11.065860 1747103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38742 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa Username:docker}
	I1222 01:54:11.162827 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 01:54:11.183033 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1222 01:54:11.201969 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 01:54:11.220891 1747103 provision.go:87] duration metric: took 313.880106ms to configureAuth
	I1222 01:54:11.220925 1747103 ubuntu.go:206] setting minikube options for container-runtime
	I1222 01:54:11.221122 1747103 config.go:182] Loaded profile config "enable-default-cni-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:54:11.221137 1747103 machine.go:97] duration metric: took 3.789213796s to provisionDockerMachine
	I1222 01:54:11.221144 1747103 client.go:176] duration metric: took 9.597901162s to LocalClient.Create
	I1222 01:54:11.221159 1747103 start.go:167] duration metric: took 9.59797286s to libmachine.API.Create "enable-default-cni-892179"
	I1222 01:54:11.221169 1747103 start.go:293] postStartSetup for "enable-default-cni-892179" (driver="docker")
	I1222 01:54:11.221179 1747103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 01:54:11.221245 1747103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 01:54:11.221290 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:11.239264 1747103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38742 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa Username:docker}
	I1222 01:54:11.338675 1747103 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 01:54:11.342347 1747103 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 01:54:11.342378 1747103 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 01:54:11.342391 1747103 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
	I1222 01:54:11.342449 1747103 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
	I1222 01:54:11.342540 1747103 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
	I1222 01:54:11.342650 1747103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 01:54:11.350701 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:54:11.368971 1747103 start.go:296] duration metric: took 147.786702ms for postStartSetup
	I1222 01:54:11.369394 1747103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-892179
	I1222 01:54:11.387195 1747103 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/config.json ...
	I1222 01:54:11.387501 1747103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:54:11.387583 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:11.405449 1747103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38742 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa Username:docker}
	I1222 01:54:11.499442 1747103 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 01:54:11.504527 1747103 start.go:128] duration metric: took 9.884972733s to createHost
	I1222 01:54:11.504552 1747103 start.go:83] releasing machines lock for "enable-default-cni-892179", held for 9.885111728s
	I1222 01:54:11.504633 1747103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-892179
	I1222 01:54:11.522379 1747103 ssh_runner.go:195] Run: cat /version.json
	I1222 01:54:11.522416 1747103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 01:54:11.522435 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:11.522494 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:11.542588 1747103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38742 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa Username:docker}
	I1222 01:54:11.548855 1747103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38742 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa Username:docker}
	I1222 01:54:11.633961 1747103 ssh_runner.go:195] Run: systemctl --version
	I1222 01:54:11.754137 1747103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 01:54:11.761464 1747103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 01:54:11.761550 1747103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 01:54:11.797748 1747103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1222 01:54:11.797777 1747103 start.go:496] detecting cgroup driver to use...
	I1222 01:54:11.797811 1747103 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 01:54:11.797868 1747103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1222 01:54:11.815815 1747103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 01:54:11.829534 1747103 docker.go:218] disabling cri-docker service (if available) ...
	I1222 01:54:11.829604 1747103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1222 01:54:11.848030 1747103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1222 01:54:11.866908 1747103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1222 01:54:11.979178 1747103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1222 01:54:12.107922 1747103 docker.go:234] disabling docker service ...
	I1222 01:54:12.107998 1747103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1222 01:54:12.130609 1747103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1222 01:54:12.144449 1747103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1222 01:54:12.267941 1747103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1222 01:54:12.383728 1747103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 01:54:12.397988 1747103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 01:54:12.412912 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 01:54:12.422705 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 01:54:12.432030 1747103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 01:54:12.432108 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 01:54:12.441846 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:54:12.450773 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 01:54:12.460321 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 01:54:12.470185 1747103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 01:54:12.479263 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 01:54:12.489629 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 01:54:12.502565 1747103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 01:54:12.512496 1747103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 01:54:12.521345 1747103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 01:54:12.530478 1747103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:54:12.645803 1747103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 01:54:12.777545 1747103 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1222 01:54:12.777629 1747103 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1222 01:54:12.781439 1747103 start.go:564] Will wait 60s for crictl version
	I1222 01:54:12.781511 1747103 ssh_runner.go:195] Run: which crictl
	I1222 01:54:12.785050 1747103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 01:54:12.811719 1747103 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1222 01:54:12.811800 1747103 ssh_runner.go:195] Run: containerd --version
	I1222 01:54:12.834062 1747103 ssh_runner.go:195] Run: containerd --version
	I1222 01:54:12.858536 1747103 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.1 ...
	I1222 01:54:12.861482 1747103 cli_runner.go:164] Run: docker network inspect enable-default-cni-892179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 01:54:12.877932 1747103 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 01:54:12.882030 1747103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:54:12.892940 1747103 kubeadm.go:884] updating cluster {Name:enable-default-cni-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-892179 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreD
NSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1222 01:54:12.893069 1747103 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 01:54:12.893143 1747103 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:54:12.922524 1747103 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:54:12.922549 1747103 containerd.go:534] Images already preloaded, skipping extraction
	I1222 01:54:12.922612 1747103 ssh_runner.go:195] Run: sudo crictl images --output json
	I1222 01:54:12.947512 1747103 containerd.go:627] all images are preloaded for containerd runtime.
	I1222 01:54:12.947549 1747103 cache_images.go:86] Images are preloaded, skipping loading
	I1222 01:54:12.947558 1747103 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 containerd true true} ...
	I1222 01:54:12.947655 1747103 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-892179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-892179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1222 01:54:12.947722 1747103 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1222 01:54:12.974100 1747103 cni.go:84] Creating CNI manager for "bridge"
	I1222 01:54:12.974138 1747103 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 01:54:12.974191 1747103 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-892179 NodeName:enable-default-cni-892179 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 01:54:12.974353 1747103 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-892179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 01:54:12.974428 1747103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 01:54:12.982440 1747103 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 01:54:12.982578 1747103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 01:54:12.990728 1747103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1222 01:54:13.006191 1747103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 01:54:13.021163 1747103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
	I1222 01:54:13.034824 1747103 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 01:54:13.038725 1747103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 01:54:13.048670 1747103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:54:13.160365 1747103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:54:13.177727 1747103 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179 for IP: 192.168.76.2
	I1222 01:54:13.177753 1747103 certs.go:195] generating shared ca certs ...
	I1222 01:54:13.177771 1747103 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:13.177973 1747103 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
	I1222 01:54:13.178048 1747103 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
	I1222 01:54:13.178067 1747103 certs.go:257] generating profile certs ...
	I1222 01:54:13.178201 1747103 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/client.key
	I1222 01:54:13.178222 1747103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/client.crt with IP's: []
	I1222 01:54:13.435501 1747103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/client.crt ...
	I1222 01:54:13.435590 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/client.crt: {Name:mk1c376dbc38a4656a9d39d7d267072414d94b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:13.435872 1747103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/client.key ...
	I1222 01:54:13.435892 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/client.key: {Name:mkfde74fe02b588af61e5138fc46ae0df092330a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:13.436044 1747103 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.key.2086c42f
	I1222 01:54:13.436069 1747103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.crt.2086c42f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 01:54:13.802701 1747103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.crt.2086c42f ...
	I1222 01:54:13.802737 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.crt.2086c42f: {Name:mk49bdd46910652122da4055de555ed5babba07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:13.802943 1747103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.key.2086c42f ...
	I1222 01:54:13.802960 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.key.2086c42f: {Name:mkb4e245816b9a842e59f3511c2e8197b7b5d2ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:13.803052 1747103 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.crt.2086c42f -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.crt
	I1222 01:54:13.803143 1747103 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.key.2086c42f -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.key
	I1222 01:54:13.803209 1747103 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.key
	I1222 01:54:13.803232 1747103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.crt with IP's: []
	I1222 01:54:13.957777 1747103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.crt ...
	I1222 01:54:13.957812 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.crt: {Name:mkb64405a9c295771a8b9f95621143c94e6bb47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:13.958018 1747103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.key ...
	I1222 01:54:13.958033 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.key: {Name:mk58d18774545284469f683a360da9f5ec652aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:13.958275 1747103 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
	W1222 01:54:13.958322 1747103 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
	I1222 01:54:13.958335 1747103 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 01:54:13.958361 1747103 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
	I1222 01:54:13.958389 1747103 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
	I1222 01:54:13.958417 1747103 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
	I1222 01:54:13.958472 1747103 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
	I1222 01:54:13.959096 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 01:54:13.977927 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 01:54:14.021558 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 01:54:14.050655 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 01:54:14.079338 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1222 01:54:14.101220 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 01:54:14.120037 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 01:54:14.138672 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/enable-default-cni-892179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 01:54:14.156805 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
	I1222 01:54:14.175229 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 01:54:14.192990 1747103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
	I1222 01:54:14.210669 1747103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 01:54:14.223738 1747103 ssh_runner.go:195] Run: openssl version
	I1222 01:54:14.230300 1747103 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
	I1222 01:54:14.237860 1747103 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
	I1222 01:54:14.245387 1747103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
	I1222 01:54:14.249219 1747103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
	I1222 01:54:14.249337 1747103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
	I1222 01:54:14.291510 1747103 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 01:54:14.299038 1747103 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
	I1222 01:54:14.306510 1747103 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:54:14.313919 1747103 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 01:54:14.321620 1747103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:54:14.325590 1747103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:54:14.325686 1747103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 01:54:14.367545 1747103 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 01:54:14.375346 1747103 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 01:54:14.383041 1747103 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
	I1222 01:54:14.390729 1747103 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
	I1222 01:54:14.398538 1747103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
	I1222 01:54:14.402466 1747103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
	I1222 01:54:14.402532 1747103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
	I1222 01:54:14.443676 1747103 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 01:54:14.451047 1747103 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
	I1222 01:54:14.458347 1747103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 01:54:14.461791 1747103 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 01:54:14.461842 1747103 kubeadm.go:401] StartCluster: {Name:enable-default-cni-892179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-892179 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSL
og:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 01:54:14.461919 1747103 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1222 01:54:14.461983 1747103 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1222 01:54:14.491866 1747103 cri.go:96] found id: ""
	I1222 01:54:14.492019 1747103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 01:54:14.500133 1747103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 01:54:14.508069 1747103 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 01:54:14.508173 1747103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 01:54:14.516264 1747103 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 01:54:14.516288 1747103 kubeadm.go:158] found existing configuration files:
	
	I1222 01:54:14.516346 1747103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 01:54:14.524032 1747103 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 01:54:14.524152 1747103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 01:54:14.531944 1747103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 01:54:14.539864 1747103 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 01:54:14.539963 1747103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 01:54:14.547697 1747103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 01:54:14.556201 1747103 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 01:54:14.556277 1747103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 01:54:14.563562 1747103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 01:54:14.571401 1747103 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 01:54:14.571534 1747103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 01:54:14.578643 1747103 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 01:54:14.621094 1747103 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 01:54:14.621431 1747103 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 01:54:14.643788 1747103 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 01:54:14.643875 1747103 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1222 01:54:14.643915 1747103 kubeadm.go:319] OS: Linux
	I1222 01:54:14.643973 1747103 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 01:54:14.644027 1747103 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 01:54:14.644080 1747103 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 01:54:14.644134 1747103 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 01:54:14.644187 1747103 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 01:54:14.644238 1747103 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 01:54:14.644291 1747103 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 01:54:14.644341 1747103 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 01:54:14.644392 1747103 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 01:54:14.711008 1747103 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 01:54:14.711122 1747103 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 01:54:14.711217 1747103 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 01:54:14.722492 1747103 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 01:54:14.729449 1747103 out.go:252]   - Generating certificates and keys ...
	I1222 01:54:14.729548 1747103 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 01:54:14.729621 1747103 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 01:54:15.876193 1747103 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 01:54:16.475078 1747103 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 01:54:17.153773 1747103 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 01:54:17.801300 1747103 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 01:54:18.185337 1747103 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 01:54:18.185807 1747103 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-892179 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:54:18.423522 1747103 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 01:54:18.423897 1747103 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-892179 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1222 01:54:18.845648 1747103 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 01:54:19.572856 1747103 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 01:54:19.729968 1747103 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 01:54:19.730263 1747103 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 01:54:19.832186 1747103 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 01:54:20.325987 1747103 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 01:54:20.458212 1747103 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 01:54:20.954307 1747103 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 01:54:21.414498 1747103 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 01:54:21.415091 1747103 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 01:54:21.417755 1747103 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 01:54:21.421403 1747103 out.go:252]   - Booting up control plane ...
	I1222 01:54:21.421512 1747103 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 01:54:21.421602 1747103 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 01:54:21.421677 1747103 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 01:54:21.438586 1747103 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 01:54:21.438697 1747103 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 01:54:21.447205 1747103 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 01:54:21.448362 1747103 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 01:54:21.448655 1747103 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 01:54:21.609249 1747103 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 01:54:21.609371 1747103 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 01:54:22.611101 1747103 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001945092s
	I1222 01:54:22.614970 1747103 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 01:54:22.615072 1747103 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1222 01:54:22.615168 1747103 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 01:54:22.615247 1747103 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1222 01:54:27.133433 1747103 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.517790236s
	I1222 01:54:29.617317 1747103 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002194241s
	I1222 01:54:30.583896 1747103 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.968955496s
	I1222 01:54:30.628162 1747103 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 01:54:30.641893 1747103 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 01:54:30.657209 1747103 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 01:54:30.657669 1747103 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-892179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 01:54:30.670188 1747103 kubeadm.go:319] [bootstrap-token] Using token: 9ld786.4gn87rf90s0pq733
	I1222 01:54:30.675190 1747103 out.go:252]   - Configuring RBAC rules ...
	I1222 01:54:30.675329 1747103 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 01:54:30.682960 1747103 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 01:54:30.692510 1747103 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 01:54:30.697992 1747103 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 01:54:30.702852 1747103 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 01:54:30.707432 1747103 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 01:54:30.991897 1747103 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 01:54:31.448684 1747103 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 01:54:31.995755 1747103 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 01:54:31.997504 1747103 kubeadm.go:319] 
	I1222 01:54:31.997585 1747103 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 01:54:31.997596 1747103 kubeadm.go:319] 
	I1222 01:54:31.997674 1747103 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 01:54:31.997683 1747103 kubeadm.go:319] 
	I1222 01:54:31.997709 1747103 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 01:54:31.998270 1747103 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 01:54:31.998335 1747103 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 01:54:31.998347 1747103 kubeadm.go:319] 
	I1222 01:54:31.998403 1747103 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 01:54:31.998427 1747103 kubeadm.go:319] 
	I1222 01:54:31.998481 1747103 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 01:54:31.998490 1747103 kubeadm.go:319] 
	I1222 01:54:31.998543 1747103 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 01:54:31.998622 1747103 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 01:54:31.998697 1747103 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 01:54:31.998705 1747103 kubeadm.go:319] 
	I1222 01:54:31.999048 1747103 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 01:54:31.999141 1747103 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 01:54:31.999151 1747103 kubeadm.go:319] 
	I1222 01:54:31.999439 1747103 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9ld786.4gn87rf90s0pq733 \
	I1222 01:54:31.999563 1747103 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:55a75a4878aa9ec4082586970e7505f4492fcc0138e33ff8e472e16a7e145535 \
	I1222 01:54:31.999786 1747103 kubeadm.go:319] 	--control-plane 
	I1222 01:54:31.999807 1747103 kubeadm.go:319] 
	I1222 01:54:32.000096 1747103 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 01:54:32.000110 1747103 kubeadm.go:319] 
	I1222 01:54:32.000408 1747103 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9ld786.4gn87rf90s0pq733 \
	I1222 01:54:32.000715 1747103 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:55a75a4878aa9ec4082586970e7505f4492fcc0138e33ff8e472e16a7e145535 
	I1222 01:54:32.011127 1747103 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 01:54:32.011376 1747103 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1222 01:54:32.011494 1747103 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 01:54:32.011592 1747103 cni.go:84] Creating CNI manager for "bridge"
	I1222 01:54:32.016622 1747103 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1222 01:54:32.020384 1747103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1222 01:54:32.034041 1747103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1222 01:54:32.060911 1747103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 01:54:32.061040 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:32.061136 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-892179 minikube.k8s.io/updated_at=2025_12_22T01_54_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=enable-default-cni-892179 minikube.k8s.io/primary=true
	I1222 01:54:32.223300 1747103 ops.go:34] apiserver oom_adj: -16
	I1222 01:54:32.223402 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:32.723581 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:33.224307 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:33.723833 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:34.224292 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:34.723843 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:35.224290 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:35.724244 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:36.224432 1747103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 01:54:36.347086 1747103 kubeadm.go:1114] duration metric: took 4.286092741s to wait for elevateKubeSystemPrivileges
	I1222 01:54:36.347120 1747103 kubeadm.go:403] duration metric: took 21.885282309s to StartCluster
	I1222 01:54:36.347140 1747103 settings.go:142] acquiring lock: {Name:mk10fbc51e5384d725fd46ab36048358281cb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:36.347221 1747103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:54:36.348239 1747103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/kubeconfig: {Name:mk08a9ce05fb3d2aec66c2d4776a38c1f64c42df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 01:54:36.348490 1747103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 01:54:36.348506 1747103 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 01:54:36.348568 1747103 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-892179"
	I1222 01:54:36.348488 1747103 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1222 01:54:36.348582 1747103 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-892179"
	I1222 01:54:36.348796 1747103 config.go:182] Loaded profile config "enable-default-cni-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:54:36.348839 1747103 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-892179"
	I1222 01:54:36.348855 1747103 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-892179"
	I1222 01:54:36.349162 1747103 cli_runner.go:164] Run: docker container inspect enable-default-cni-892179 --format={{.State.Status}}
	I1222 01:54:36.348603 1747103 host.go:66] Checking if "enable-default-cni-892179" exists ...
	I1222 01:54:36.350055 1747103 cli_runner.go:164] Run: docker container inspect enable-default-cni-892179 --format={{.State.Status}}
	I1222 01:54:36.353914 1747103 out.go:179] * Verifying Kubernetes components...
	I1222 01:54:36.356868 1747103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 01:54:36.390738 1747103 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-892179"
	I1222 01:54:36.390778 1747103 host.go:66] Checking if "enable-default-cni-892179" exists ...
	I1222 01:54:36.391193 1747103 cli_runner.go:164] Run: docker container inspect enable-default-cni-892179 --format={{.State.Status}}
	I1222 01:54:36.398156 1747103 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 01:54:36.402369 1747103 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:54:36.402404 1747103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 01:54:36.402498 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:36.442523 1747103 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 01:54:36.442550 1747103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 01:54:36.442614 1747103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-892179
	I1222 01:54:36.457773 1747103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38742 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa Username:docker}
	I1222 01:54:36.483981 1747103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38742 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/enable-default-cni-892179/id_rsa Username:docker}
	I1222 01:54:36.874812 1747103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 01:54:36.874935 1747103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 01:54:36.897125 1747103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 01:54:36.943411 1747103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 01:54:37.836519 1747103 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-892179" to be "Ready" ...
	I1222 01:54:37.836632 1747103 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1222 01:54:37.866602 1747103 node_ready.go:49] node "enable-default-cni-892179" is "Ready"
	I1222 01:54:37.866634 1747103 node_ready.go:38] duration metric: took 29.09997ms for node "enable-default-cni-892179" to be "Ready" ...
	I1222 01:54:37.866649 1747103 api_server.go:52] waiting for apiserver process to appear ...
	I1222 01:54:37.866730 1747103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:54:38.112644 1747103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.169144849s)
	I1222 01:54:38.112740 1747103 api_server.go:72] duration metric: took 1.764143694s to wait for apiserver process to appear ...
	I1222 01:54:38.112771 1747103 api_server.go:88] waiting for apiserver healthz status ...
	I1222 01:54:38.112805 1747103 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1222 01:54:38.117890 1747103 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1222 01:54:38.121610 1747103 addons.go:530] duration metric: took 1.773086246s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1222 01:54:38.122795 1747103 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1222 01:54:38.124126 1747103 api_server.go:141] control plane version: v1.34.3
	I1222 01:54:38.124157 1747103 api_server.go:131] duration metric: took 11.366524ms to wait for apiserver health ...
	I1222 01:54:38.124168 1747103 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 01:54:38.132531 1747103 system_pods.go:59] 8 kube-system pods found
	I1222 01:54:38.132570 1747103 system_pods.go:61] "coredns-66bc5c9577-cxggz" [e32c0780-1de9-46ab-9ca9-740cd18277b6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:54:38.132581 1747103 system_pods.go:61] "coredns-66bc5c9577-v597g" [f20c8c1d-740e-4ae3-9cdf-382931e2b62a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:54:38.132587 1747103 system_pods.go:61] "etcd-enable-default-cni-892179" [4fd6f972-fb3e-462b-9e36-3067ec7c49e8] Running
	I1222 01:54:38.132600 1747103 system_pods.go:61] "kube-apiserver-enable-default-cni-892179" [8b1a98a2-2830-4dbf-a8cf-46bbcb15aeba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:54:38.132604 1747103 system_pods.go:61] "kube-controller-manager-enable-default-cni-892179" [b4c7a918-897b-44e0-a08a-40db159fe1b7] Running
	I1222 01:54:38.132609 1747103 system_pods.go:61] "kube-proxy-bs8zg" [9aad02ef-6d65-441d-9a55-a9df2dc6b961] Running
	I1222 01:54:38.132615 1747103 system_pods.go:61] "kube-scheduler-enable-default-cni-892179" [72e23033-b596-4e3e-9340-47895afd1ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:54:38.132625 1747103 system_pods.go:61] "storage-provisioner" [8ecb74aa-91be-437c-8a2d-f77dda0b5a4d] Pending
	I1222 01:54:38.132632 1747103 system_pods.go:74] duration metric: took 8.457986ms to wait for pod list to return data ...
	I1222 01:54:38.132651 1747103 default_sa.go:34] waiting for default service account to be created ...
	I1222 01:54:38.141138 1747103 default_sa.go:45] found service account: "default"
	I1222 01:54:38.141170 1747103 default_sa.go:55] duration metric: took 8.511115ms for default service account to be created ...
	I1222 01:54:38.141182 1747103 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 01:54:38.146230 1747103 system_pods.go:86] 8 kube-system pods found
	I1222 01:54:38.146270 1747103 system_pods.go:89] "coredns-66bc5c9577-cxggz" [e32c0780-1de9-46ab-9ca9-740cd18277b6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:54:38.146285 1747103 system_pods.go:89] "coredns-66bc5c9577-v597g" [f20c8c1d-740e-4ae3-9cdf-382931e2b62a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 01:54:38.146293 1747103 system_pods.go:89] "etcd-enable-default-cni-892179" [4fd6f972-fb3e-462b-9e36-3067ec7c49e8] Running
	I1222 01:54:38.146301 1747103 system_pods.go:89] "kube-apiserver-enable-default-cni-892179" [8b1a98a2-2830-4dbf-a8cf-46bbcb15aeba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1222 01:54:38.146307 1747103 system_pods.go:89] "kube-controller-manager-enable-default-cni-892179" [b4c7a918-897b-44e0-a08a-40db159fe1b7] Running
	I1222 01:54:38.146317 1747103 system_pods.go:89] "kube-proxy-bs8zg" [9aad02ef-6d65-441d-9a55-a9df2dc6b961] Running
	I1222 01:54:38.146323 1747103 system_pods.go:89] "kube-scheduler-enable-default-cni-892179" [72e23033-b596-4e3e-9340-47895afd1ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1222 01:54:38.146337 1747103 system_pods.go:89] "storage-provisioner" [8ecb74aa-91be-437c-8a2d-f77dda0b5a4d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 01:54:38.146345 1747103 system_pods.go:126] duration metric: took 5.157583ms to wait for k8s-apps to be running ...
	I1222 01:54:38.146357 1747103 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 01:54:38.146419 1747103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:54:38.161074 1747103 system_svc.go:56] duration metric: took 14.707337ms WaitForService to wait for kubelet
	I1222 01:54:38.161102 1747103 kubeadm.go:587] duration metric: took 1.812508866s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 01:54:38.161121 1747103 node_conditions.go:102] verifying NodePressure condition ...
	I1222 01:54:38.164441 1747103 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1222 01:54:38.164473 1747103 node_conditions.go:123] node cpu capacity is 2
	I1222 01:54:38.164487 1747103 node_conditions.go:105] duration metric: took 3.359061ms to run NodePressure ...
	I1222 01:54:38.164501 1747103 start.go:242] waiting for startup goroutines ...
	I1222 01:54:38.343311 1747103 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-892179" context rescaled to 1 replicas
	I1222 01:54:38.343356 1747103 start.go:247] waiting for cluster config update ...
	I1222 01:54:38.343378 1747103 start.go:256] writing updated cluster config ...
	I1222 01:54:38.343805 1747103 ssh_runner.go:195] Run: rm -f paused
	I1222 01:54:38.349659 1747103 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:54:38.355838 1747103 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cxggz" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:54:38.861860 1747103 pod_ready.go:94] pod "coredns-66bc5c9577-cxggz" is "Ready"
	I1222 01:54:38.861893 1747103 pod_ready.go:86] duration metric: took 506.024107ms for pod "coredns-66bc5c9577-cxggz" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:54:38.861905 1747103 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v597g" in "kube-system" namespace to be "Ready" or be gone ...
	W1222 01:54:40.868126 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:43.370386 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:45.868752 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:48.368226 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:50.867621 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:53.367186 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:55.367643 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:57.368032 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:54:59.368134 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:55:01.867965 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:55:04.368397 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:55:06.867371 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:55:08.868890 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:55:11.367543 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:55:13.868707 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	W1222 01:55:16.367462 1747103 pod_ready.go:104] pod "coredns-66bc5c9577-v597g" is not "Ready", error: <nil>
	I1222 01:55:18.867221 1747103 pod_ready.go:94] pod "coredns-66bc5c9577-v597g" is "Ready"
	I1222 01:55:18.867252 1747103 pod_ready.go:86] duration metric: took 40.005340352s for pod "coredns-66bc5c9577-v597g" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:18.870319 1747103 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:18.875014 1747103 pod_ready.go:94] pod "etcd-enable-default-cni-892179" is "Ready"
	I1222 01:55:18.875045 1747103 pod_ready.go:86] duration metric: took 4.697674ms for pod "etcd-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:18.877393 1747103 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:18.882190 1747103 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-892179" is "Ready"
	I1222 01:55:18.882266 1747103 pod_ready.go:86] duration metric: took 4.841183ms for pod "kube-apiserver-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:18.884939 1747103 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:19.065381 1747103 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-892179" is "Ready"
	I1222 01:55:19.065411 1747103 pod_ready.go:86] duration metric: took 180.445665ms for pod "kube-controller-manager-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:19.265645 1747103 pod_ready.go:83] waiting for pod "kube-proxy-bs8zg" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:19.665983 1747103 pod_ready.go:94] pod "kube-proxy-bs8zg" is "Ready"
	I1222 01:55:19.666018 1747103 pod_ready.go:86] duration metric: took 400.347938ms for pod "kube-proxy-bs8zg" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:19.866312 1747103 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:20.265785 1747103 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-892179" is "Ready"
	I1222 01:55:20.265815 1747103 pod_ready.go:86] duration metric: took 399.471162ms for pod "kube-scheduler-enable-default-cni-892179" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 01:55:20.265829 1747103 pod_ready.go:40] duration metric: took 41.916080888s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 01:55:20.319270 1747103 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1222 01:55:20.324610 1747103 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-892179" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325328676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325350666Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325391659Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325406576Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325416947Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325430043Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325439348Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325458006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325472513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325505277Z" level=info msg="Connect containerd service"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.325765062Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.326389970Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344703104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344887163Z" level=info msg="Start subscribing containerd event"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.345002741Z" level=info msg="Start recovering state"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.344959344Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364047164Z" level=info msg="Start event monitor"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364252630Z" level=info msg="Start cni network conf syncer for default"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364336438Z" level=info msg="Start streaming server"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364415274Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364479832Z" level=info msg="runtime interface starting up..."
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364536588Z" level=info msg="starting plugins..."
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364617967Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 22 01:36:29 no-preload-154186 containerd[555]: time="2025-12-22T01:36:29.364818765Z" level=info msg="containerd successfully booted in 0.063428s"
	Dec 22 01:36:29 no-preload-154186 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 01:56:03.437879   10244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:56:03.438643   10244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:56:03.440218   10244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:56:03.440526   10244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 01:56:03.442070   10244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:56:03 up 1 day,  8:38,  0 user,  load average: 2.12, 2.15, 1.79
	Linux no-preload-154186 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 01:56:00 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:00 no-preload-154186 kubelet[10107]: E1222 01:56:00.546594   10107 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:56:00 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:56:00 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:56:01 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1559.
	Dec 22 01:56:01 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:01 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:01 no-preload-154186 kubelet[10113]: E1222 01:56:01.282396   10113 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:56:01 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:56:01 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:56:01 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1560.
	Dec 22 01:56:01 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:01 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:02 no-preload-154186 kubelet[10118]: E1222 01:56:02.033404   10118 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:56:02 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:56:02 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:56:02 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1561.
	Dec 22 01:56:02 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:02 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:02 no-preload-154186 kubelet[10152]: E1222 01:56:02.773772   10152 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 01:56:02 no-preload-154186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 01:56:02 no-preload-154186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 01:56:03 no-preload-154186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1562.
	Dec 22 01:56:03 no-preload-154186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 01:56:03 no-preload-154186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-154186 -n no-preload-154186: exit status 2 (332.650645ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-154186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.57s)


Test pass (349/421)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.37
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.3/json-events 5.25
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.09
18 TestDownloadOnly/v1.34.3/DeleteAll 0.22
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-rc.1/json-events 4.11
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 126.3
38 TestAddons/serial/Volcano 42.99
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 9.85
44 TestAddons/parallel/Registry 15.39
45 TestAddons/parallel/RegistryCreds 0.72
46 TestAddons/parallel/Ingress 17.28
47 TestAddons/parallel/InspektorGadget 11.77
48 TestAddons/parallel/MetricsServer 5.92
50 TestAddons/parallel/CSI 48.06
51 TestAddons/parallel/Headlamp 11.24
52 TestAddons/parallel/CloudSpanner 6.61
53 TestAddons/parallel/LocalPath 53.37
54 TestAddons/parallel/NvidiaDevicePlugin 6.56
55 TestAddons/parallel/Yakd 10.88
57 TestAddons/StoppedEnableDisable 12.48
58 TestCertOptions 34.24
59 TestCertExpiration 222.09
61 TestForceSystemdFlag 38.13
62 TestForceSystemdEnv 39.08
63 TestDockerEnvContainerd 48.15
67 TestErrorSpam/setup 32.63
68 TestErrorSpam/start 0.85
69 TestErrorSpam/status 1.19
70 TestErrorSpam/pause 1.68
71 TestErrorSpam/unpause 1.85
72 TestErrorSpam/stop 1.61
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 49.8
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.08
79 TestFunctional/serial/KubeContext 0.08
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.8
84 TestFunctional/serial/CacheCmd/cache/add_local 1.35
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
92 TestFunctional/serial/ExtraConfig 47.45
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.45
95 TestFunctional/serial/LogsFileCmd 1.52
96 TestFunctional/serial/InvalidService 4.34
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 9.01
100 TestFunctional/parallel/DryRun 0.53
101 TestFunctional/parallel/InternationalLanguage 0.27
102 TestFunctional/parallel/StatusCmd 1.16
106 TestFunctional/parallel/ServiceCmdConnect 7.78
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 20.57
110 TestFunctional/parallel/SSHCmd 0.55
111 TestFunctional/parallel/CpCmd 2.09
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.23
118 TestFunctional/parallel/NodeLabels 0.12
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
122 TestFunctional/parallel/License 0.34
123 TestFunctional/parallel/Version/short 0.08
124 TestFunctional/parallel/Version/components 1.57
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.02
130 TestFunctional/parallel/ImageCommands/Setup 0.69
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.44
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.42
136 TestFunctional/parallel/ServiceCmd/DeployApp 8.3
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.39
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
147 TestFunctional/parallel/ServiceCmd/List 0.34
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
150 TestFunctional/parallel/ServiceCmd/Format 0.37
151 TestFunctional/parallel/ServiceCmd/URL 0.39
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
159 TestFunctional/parallel/ProfileCmd/profile_list 0.51
160 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
161 TestFunctional/parallel/MountCmd/any-port 8.05
162 TestFunctional/parallel/MountCmd/specific-port 2.24
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.27
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.13
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.82
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.14
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.97
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.57
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.22
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.71
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.25
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.66
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.29
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.43
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.43
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.38
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.82
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 2.01
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.51
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.22
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.61
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.12
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.07
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.32
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.47
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.68
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.35
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 143.81
265 TestMultiControlPlane/serial/DeployApp 7.36
266 TestMultiControlPlane/serial/PingHostFromPods 1.66
267 TestMultiControlPlane/serial/AddWorkerNode 30.57
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
270 TestMultiControlPlane/serial/CopyFile 20.15
271 TestMultiControlPlane/serial/StopSecondaryNode 13.01
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.35
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.48
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.08
276 TestMultiControlPlane/serial/DeleteSecondaryNode 10.96
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
278 TestMultiControlPlane/serial/StopCluster 36.63
279 TestMultiControlPlane/serial/RestartCluster 61.88
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
281 TestMultiControlPlane/serial/AddSecondaryNode 85.04
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.13
287 TestJSONOutput/start/Command 51.35
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.74
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.65
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.05
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 35.19
313 TestKicCustomNetwork/use_default_bridge_network 36.62
314 TestKicExistingNetwork 35.26
315 TestKicCustomSubnet 38.32
316 TestKicStaticIP 36.75
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 71.49
321 TestMountStart/serial/StartWithMountFirst 8.64
322 TestMountStart/serial/VerifyMountFirst 0.27
323 TestMountStart/serial/StartWithMountSecond 5.86
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.69
326 TestMountStart/serial/VerifyMountPostDelete 0.26
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.47
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 78.84
333 TestMultiNode/serial/DeployApp2Nodes 6.04
334 TestMultiNode/serial/PingHostFrom2Pods 1.02
335 TestMultiNode/serial/AddNode 29.37
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.52
339 TestMultiNode/serial/StopNode 2.42
340 TestMultiNode/serial/StartAfterStop 8.07
341 TestMultiNode/serial/RestartKeepsNodes 77.92
342 TestMultiNode/serial/DeleteNode 5.71
343 TestMultiNode/serial/StopMultiNode 24.15
344 TestMultiNode/serial/RestartMultiNode 49.28
345 TestMultiNode/serial/ValidateNameConflict 36.05
352 TestScheduledStopUnix 109.9
355 TestInsufficientStorage 9.99
356 TestRunningBinaryUpgrade 313.13
359 TestMissingContainerUpgrade 139.26
361 TestPause/serial/Start 62.2
362 TestPause/serial/SecondStartNoReconfiguration 7.06
363 TestPause/serial/Pause 0.7
364 TestPause/serial/VerifyStatus 0.32
365 TestPause/serial/Unpause 0.63
366 TestPause/serial/PauseAgain 0.9
367 TestPause/serial/DeletePaused 2.79
368 TestPause/serial/VerifyDeletedResources 0.19
369 TestStoppedBinaryUpgrade/Setup 1.07
370 TestStoppedBinaryUpgrade/Upgrade 54.22
371 TestStoppedBinaryUpgrade/MinikubeLogs 2.58
379 TestPreload/Start-NoPreload-PullImage 66.81
380 TestPreload/Restart-With-Preload-Check-User-Image 48.71
383 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
384 TestNoKubernetes/serial/StartWithK8s 32.74
385 TestNoKubernetes/serial/StartWithStopK8s 16.04
386 TestNoKubernetes/serial/Start 7.71
387 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
388 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
389 TestNoKubernetes/serial/ProfileList 1.07
390 TestNoKubernetes/serial/Stop 1.31
391 TestNoKubernetes/serial/StartNoArgs 6.56
392 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
400 TestNetworkPlugins/group/false 3.78
405 TestStartStop/group/old-k8s-version/serial/FirstStart 70.8
407 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.21
408 TestStartStop/group/old-k8s-version/serial/DeployApp 10.54
409 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.43
410 TestStartStop/group/old-k8s-version/serial/Stop 12.33
411 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
412 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
413 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
414 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
415 TestStartStop/group/old-k8s-version/serial/SecondStart 51.23
416 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.33
417 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.03
418 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
419 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
420 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
421 TestStartStop/group/old-k8s-version/serial/Pause 3.62
422 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
424 TestStartStop/group/embed-certs/serial/FirstStart 69.66
425 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.19
426 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
427 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.63
430 TestStartStop/group/embed-certs/serial/DeployApp 9.37
431 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
432 TestStartStop/group/embed-certs/serial/Stop 12.18
433 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
434 TestStartStop/group/embed-certs/serial/SecondStart 52.93
435 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
436 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
437 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
438 TestStartStop/group/embed-certs/serial/Pause 3.09
443 TestStartStop/group/no-preload/serial/Stop 1.32
444 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
446 TestStartStop/group/newest-cni/serial/DeployApp 0
448 TestStartStop/group/newest-cni/serial/Stop 1.3
449 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
452 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
453 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
454 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
456 TestPreload/PreloadSrc/gcs 6.93
457 TestPreload/PreloadSrc/github 7.67
458 TestPreload/PreloadSrc/gcs-cached 0.9
459 TestNetworkPlugins/group/auto/Start 46.59
460 TestNetworkPlugins/group/auto/KubeletFlags 0.35
461 TestNetworkPlugins/group/auto/NetCatPod 9.26
462 TestNetworkPlugins/group/auto/DNS 0.19
463 TestNetworkPlugins/group/auto/Localhost 0.16
464 TestNetworkPlugins/group/auto/HairPin 0.15
465 TestNetworkPlugins/group/flannel/Start 54.11
466 TestNetworkPlugins/group/flannel/ControllerPod 6.01
467 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
468 TestNetworkPlugins/group/flannel/NetCatPod 10.26
469 TestNetworkPlugins/group/flannel/DNS 0.19
470 TestNetworkPlugins/group/flannel/Localhost 0.14
471 TestNetworkPlugins/group/flannel/HairPin 0.15
472 TestNetworkPlugins/group/calico/Start 63.95
473 TestNetworkPlugins/group/calico/ControllerPod 6.01
474 TestNetworkPlugins/group/calico/KubeletFlags 0.31
475 TestNetworkPlugins/group/calico/NetCatPod 10.26
476 TestNetworkPlugins/group/calico/DNS 0.2
477 TestNetworkPlugins/group/calico/Localhost 0.19
478 TestNetworkPlugins/group/calico/HairPin 0.17
479 TestNetworkPlugins/group/custom-flannel/Start 58.77
480 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
481 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
482 TestNetworkPlugins/group/custom-flannel/DNS 0.24
483 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
484 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
485 TestNetworkPlugins/group/kindnet/Start 53.57
487 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
488 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
489 TestNetworkPlugins/group/kindnet/NetCatPod 8.26
490 TestNetworkPlugins/group/kindnet/DNS 0.17
491 TestNetworkPlugins/group/kindnet/Localhost 0.18
492 TestNetworkPlugins/group/kindnet/HairPin 0.15
493 TestNetworkPlugins/group/bridge/Start 43.86
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
495 TestNetworkPlugins/group/bridge/NetCatPod 9.31
496 TestNetworkPlugins/group/bridge/DNS 0.17
497 TestNetworkPlugins/group/bridge/Localhost 0.15
498 TestNetworkPlugins/group/bridge/HairPin 0.15
499 TestNetworkPlugins/group/enable-default-cni/Start 79.01
500 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
501 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
502 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
503 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
504 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (6.37s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-400336 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-400336 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.366691214s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.37s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1222 00:04:10.324631 1396864 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1222 00:04:10.324716 1396864 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-400336
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-400336: exit status 85 (98.1634ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-400336 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-400336 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:04:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:04:04.003138 1396869 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:04:04.003342 1396869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:04:04.003377 1396869 out.go:374] Setting ErrFile to fd 2...
	I1222 00:04:04.003416 1396869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:04:04.003716 1396869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	W1222 00:04:04.003907 1396869 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22179-1395000/.minikube/config/config.json: open /home/jenkins/minikube-integration/22179-1395000/.minikube/config/config.json: no such file or directory
	I1222 00:04:04.004443 1396869 out.go:368] Setting JSON to true
	I1222 00:04:04.005453 1396869 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":110797,"bootTime":1766251047,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:04:04.005581 1396869 start.go:143] virtualization:  
	I1222 00:04:04.016215 1396869 out.go:99] [download-only-400336] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1222 00:04:04.016457 1396869 preload.go:369] Failed to list preload files: open /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball: no such file or directory
	I1222 00:04:04.016613 1396869 notify.go:221] Checking for updates...
	I1222 00:04:04.020675 1396869 out.go:171] MINIKUBE_LOCATION=22179
	I1222 00:04:04.023940 1396869 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:04:04.026961 1396869 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:04:04.029974 1396869 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:04:04.033180 1396869 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1222 00:04:04.039238 1396869 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 00:04:04.039552 1396869 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:04:04.068659 1396869 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:04:04.068802 1396869 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:04:04.128819 1396869 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-22 00:04:04.119543073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:04:04.128924 1396869 docker.go:319] overlay module found
	I1222 00:04:04.132178 1396869 out.go:99] Using the docker driver based on user configuration
	I1222 00:04:04.132215 1396869 start.go:309] selected driver: docker
	I1222 00:04:04.132223 1396869 start.go:928] validating driver "docker" against <nil>
	I1222 00:04:04.132341 1396869 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:04:04.186141 1396869 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-22 00:04:04.176902153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:04:04.186309 1396869 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:04:04.186593 1396869 start_flags.go:413] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1222 00:04:04.186751 1396869 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 00:04:04.189868 1396869 out.go:171] Using Docker driver with root privileges
	I1222 00:04:04.192893 1396869 cni.go:84] Creating CNI manager for ""
	I1222 00:04:04.192967 1396869 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:04:04.192984 1396869 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:04:04.193069 1396869 start.go:353] cluster config:
	{Name:download-only-400336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-400336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:04:04.196287 1396869 out.go:99] Starting "download-only-400336" primary control-plane node in "download-only-400336" cluster
	I1222 00:04:04.196312 1396869 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:04:04.199247 1396869 out.go:99] Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:04:04.199292 1396869 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1222 00:04:04.199478 1396869 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:04:04.215667 1396869 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1222 00:04:04.215843 1396869 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1222 00:04:04.215945 1396869 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1222 00:04:04.251196 1396869 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1222 00:04:04.251219 1396869 cache.go:65] Caching tarball of preloaded images
	I1222 00:04:04.251409 1396869 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1222 00:04:04.254885 1396869 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1222 00:04:04.254931 1396869 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1222 00:04:04.254939 1396869 preload.go:333] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1222 00:04:04.336493 1396869 preload.go:310] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1222 00:04:04.336627 1396869 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-400336 host does not exist
	  To start a cluster, run: "minikube start -p download-only-400336"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-400336
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-866072 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-866072 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.246334662s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (5.25s)

=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1222 00:04:16.055580 1396864 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
I1222 00:04:16.055618 1396864 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-866072
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-866072: exit status 85 (92.040734ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-400336 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-400336 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │ 22 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-400336                                                                                                                                                               │ download-only-400336 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │ 22 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-866072 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-866072 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:04:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:04:10.853128 1397065 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:04:10.853247 1397065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:04:10.853265 1397065 out.go:374] Setting ErrFile to fd 2...
	I1222 00:04:10.853270 1397065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:04:10.853538 1397065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:04:10.853949 1397065 out.go:368] Setting JSON to true
	I1222 00:04:10.854823 1397065 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":110804,"bootTime":1766251047,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:04:10.854891 1397065 start.go:143] virtualization:  
	I1222 00:04:10.858351 1397065 out.go:99] [download-only-866072] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:04:10.858570 1397065 notify.go:221] Checking for updates...
	I1222 00:04:10.861502 1397065 out.go:171] MINIKUBE_LOCATION=22179
	I1222 00:04:10.864678 1397065 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:04:10.867704 1397065 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:04:10.870657 1397065 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:04:10.873505 1397065 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1222 00:04:10.879197 1397065 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 00:04:10.879577 1397065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:04:10.915242 1397065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:04:10.915378 1397065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:04:10.974444 1397065 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-22 00:04:10.964333107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:04:10.974559 1397065 docker.go:319] overlay module found
	I1222 00:04:10.977437 1397065 out.go:99] Using the docker driver based on user configuration
	I1222 00:04:10.977482 1397065 start.go:309] selected driver: docker
	I1222 00:04:10.977491 1397065 start.go:928] validating driver "docker" against <nil>
	I1222 00:04:10.977619 1397065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:04:11.035143 1397065 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-22 00:04:11.026217258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:04:11.035293 1397065 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:04:11.035575 1397065 start_flags.go:413] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1222 00:04:11.035730 1397065 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 00:04:11.038785 1397065 out.go:171] Using Docker driver with root privileges
	I1222 00:04:11.041534 1397065 cni.go:84] Creating CNI manager for ""
	I1222 00:04:11.041603 1397065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1222 00:04:11.041620 1397065 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1222 00:04:11.041698 1397065 start.go:353] cluster config:
	{Name:download-only-866072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-866072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:04:11.044727 1397065 out.go:99] Starting "download-only-866072" primary control-plane node in "download-only-866072" cluster
	I1222 00:04:11.044753 1397065 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1222 00:04:11.047767 1397065 out.go:99] Pulling base image v0.0.48-1766219634-22260 ...
	I1222 00:04:11.047813 1397065 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 00:04:11.048016 1397065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1222 00:04:11.063663 1397065 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1222 00:04:11.063811 1397065 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1222 00:04:11.063834 1397065 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1222 00:04:11.063839 1397065 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1222 00:04:11.063846 1397065 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1222 00:04:11.104064 1397065 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1222 00:04:11.104095 1397065 cache.go:65] Caching tarball of preloaded images
	I1222 00:04:11.104280 1397065 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 00:04:11.107418 1397065 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1222 00:04:11.107453 1397065 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1222 00:04:11.107462 1397065 preload.go:333] getting checksum for preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1222 00:04:11.196050 1397065 preload.go:310] Got checksum from GCS API "cec854b4ba05b56d256f7c601add2b98"
	I1222 00:04:11.196104 1397065 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:cec854b4ba05b56d256f7c601add2b98 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1222 00:04:15.161219 1397065 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1222 00:04:15.161640 1397065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/download-only-866072/config.json ...
	I1222 00:04:15.161683 1397065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/download-only-866072/config.json: {Name:mk6f956dec80a70274fd07a9ecbdfe7d03fd1e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 00:04:15.161873 1397065 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1222 00:04:15.162037 1397065 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/linux/arm64/v1.34.3/kubectl
	
	
	* The control-plane node download-only-866072 host does not exist
	  To start a cluster, run: "minikube start -p download-only-866072"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-866072
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-416693 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-416693 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.111623208s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (4.11s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1222 00:04:20.625366 1396864 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1222 00:04:20.625400 1396864 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-416693
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-416693: exit status 85 (88.60133ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-400336 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-400336 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │ 22 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-400336                                                                                                                                                                    │ download-only-400336 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │ 22 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-866072 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-866072 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │ 22 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-866072                                                                                                                                                                    │ download-only-866072 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │ 22 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-416693 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-416693 │ jenkins │ v1.37.0 │ 22 Dec 25 00:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 00:04:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 00:04:16.560286 1397265 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:04:16.560460 1397265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:04:16.560478 1397265 out.go:374] Setting ErrFile to fd 2...
	I1222 00:04:16.560485 1397265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:04:16.560745 1397265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:04:16.561175 1397265 out.go:368] Setting JSON to true
	I1222 00:04:16.562029 1397265 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":110809,"bootTime":1766251047,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:04:16.562138 1397265 start.go:143] virtualization:  
	I1222 00:04:16.565703 1397265 out.go:99] [download-only-416693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:04:16.565978 1397265 notify.go:221] Checking for updates...
	I1222 00:04:16.569069 1397265 out.go:171] MINIKUBE_LOCATION=22179
	I1222 00:04:16.572129 1397265 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:04:16.575124 1397265 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:04:16.578068 1397265 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:04:16.581070 1397265 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1222 00:04:16.586662 1397265 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 00:04:16.587005 1397265 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:04:16.615354 1397265 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:04:16.615476 1397265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:04:16.682678 1397265 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:04:16.673407789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:04:16.682801 1397265 docker.go:319] overlay module found
	I1222 00:04:16.685928 1397265 out.go:99] Using the docker driver based on user configuration
	I1222 00:04:16.685970 1397265 start.go:309] selected driver: docker
	I1222 00:04:16.685988 1397265 start.go:928] validating driver "docker" against <nil>
	I1222 00:04:16.686172 1397265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:04:16.749730 1397265 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-22 00:04:16.740808455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:04:16.749923 1397265 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1222 00:04:16.750220 1397265 start_flags.go:413] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1222 00:04:16.750372 1397265 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 00:04:16.753502 1397265 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-416693 host does not exist
	  To start a cluster, run: "minikube start -p download-only-416693"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-416693
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1222 00:04:21.990265 1396864 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-728113 --alsologtostderr --binary-mirror http://127.0.0.1:46215 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-728113" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-728113
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-984861
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-984861: exit status 85 (80.671294ms)

-- stdout --
	* Profile "addons-984861" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-984861"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-984861
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-984861: exit status 85 (82.190576ms)

-- stdout --
	* Profile "addons-984861" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-984861"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (126.3s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-984861 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-984861 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.303823584s)
--- PASS: TestAddons/Setup (126.30s)

TestAddons/serial/Volcano (42.99s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 59.942874ms
addons_test.go:878: volcano-admission stabilized in 60.769208ms
addons_test.go:886: volcano-controller stabilized in 60.813958ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-bw6j5" [8118b9ce-f0ba-4925-9326-3f0e2111c470] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005023593s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-vpvm7" [6eed3ce7-6f49-4be4-a141-069b37b66697] Pending / Ready:ContainersNotReady (containers with unready status: [admission]) / ContainersReady:ContainersNotReady (containers with unready status: [admission])
helpers_test.go:353: "volcano-admission-6c447bd768-vpvm7" [6eed3ce7-6f49-4be4-a141-069b37b66697] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 7.004377784s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-mmllp" [7410f945-a726-4fda-9073-b9d37ff9453c] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003355542s
addons_test.go:905: (dbg) Run:  kubectl --context addons-984861 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-984861 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-984861 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [11c2a854-c744-403b-925c-14028ccce6df] Pending
helpers_test.go:353: "test-job-nginx-0" [11c2a854-c744-403b-925c-14028ccce6df] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [11c2a854-c744-403b-925c-14028ccce6df] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004068845s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-984861 addons disable volcano --alsologtostderr -v=1: (12.362814917s)
--- PASS: TestAddons/serial/Volcano (42.99s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-984861 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-984861 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-984861 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-984861 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [fb4eabdb-a564-4143-a1fe-d99de1bb84b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [fb4eabdb-a564-4143-a1fe-d99de1bb84b8] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003427577s
addons_test.go:696: (dbg) Run:  kubectl --context addons-984861 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-984861 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-984861 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-984861 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

TestAddons/parallel/Registry (15.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.034707ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-rsc7v" [98a98243-a89c-4a7e-bc67-eb9852159d4b] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003631168s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-52vwx" [47f3b308-9e95-45d7-9514-3f5295bfcdc5] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005672979s
addons_test.go:394: (dbg) Run:  kubectl --context addons-984861 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-984861 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-984861 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.373201421s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 ip
2025/12/22 00:07:45 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.39s)

TestAddons/parallel/RegistryCreds (0.72s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.205753ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-984861
addons_test.go:334: (dbg) Run:  kubectl --context addons-984861 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/Ingress (17.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-984861 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-984861 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-984861 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [890e8de4-d58c-4b39-a134-584356af66e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [890e8de4-d58c-4b39-a134-584356af66e1] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003011165s
I1222 00:09:00.574202 1396864 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-984861 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-984861 addons disable ingress-dns --alsologtostderr -v=1: (1.693346321s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-984861 addons disable ingress --alsologtostderr -v=1: (7.958226834s)
--- PASS: TestAddons/parallel/Ingress (17.28s)

TestAddons/parallel/InspektorGadget (11.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-78lzq" [73f4de1e-bc18-4498-99fc-e68d678041fe] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003653867s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-984861 addons disable inspektor-gadget --alsologtostderr -v=1: (5.765601658s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.58923ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-76f8l" [73278f0a-dcb3-482e-99af-7ade85dc855f] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004791498s
addons_test.go:465: (dbg) Run:  kubectl --context addons-984861 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

TestAddons/parallel/CSI (48.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1222 00:07:42.175538 1396864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1222 00:07:42.181411 1396864 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1222 00:07:42.181441 1396864 kapi.go:107] duration metric: took 10.223097ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 10.237547ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-984861 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-984861 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [3c6a5e8b-5724-4639-8ade-f3bbc462b515] Pending
helpers_test.go:353: "task-pv-pod" [3c6a5e8b-5724-4639-8ade-f3bbc462b515] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [3c6a5e8b-5724-4639-8ade-f3bbc462b515] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003798661s
addons_test.go:574: (dbg) Run:  kubectl --context addons-984861 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-984861 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-984861 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-984861 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-984861 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-984861 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-984861 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [ee951ebb-5281-4988-b0c7-e74236177442] Pending
helpers_test.go:353: "task-pv-pod-restore" [ee951ebb-5281-4988-b0c7-e74236177442] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [ee951ebb-5281-4988-b0c7-e74236177442] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003544812s
addons_test.go:616: (dbg) Run:  kubectl --context addons-984861 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-984861 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-984861 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-984861 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.03062576s)
--- PASS: TestAddons/parallel/CSI (48.06s)
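The CSI block above walks a full provision, attach, snapshot, delete, restore cycle. A condensed sketch of the same sequence with plain kubectl, hedged: the object names (`hpvc`, `task-pv-pod`, `new-snapshot-demo`) mirror the testdata this suite uses, but the manifests themselves are assumed, and the commands need a cluster with the csi-hostpath-driver and volumesnapshots addons enabled.

```shell
command -v kubectl >/dev/null || exit 0   # skip where no cluster is available

# Provision a PVC and a pod mounting it (manifests assumed, as in testdata/).
kubectl create -f pvc.yaml                # PVC "hpvc"
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
kubectl create -f pv-pod.yaml             # pod "task-pv-pod" mounting hpvc
kubectl wait --for=condition=Ready pod/task-pv-pod --timeout=6m

# Snapshot the volume and wait until it is usable.
kubectl create -f snapshot.yaml           # VolumeSnapshot "new-snapshot-demo"
kubectl wait --for=jsonpath='{.status.readyToUse}'=true \
  volumesnapshot/new-snapshot-demo --timeout=6m

# Drop the original pod/PVC, then restore a new PVC from the snapshot.
kubectl delete pod task-pv-pod
kubectl delete pvc hpvc
kubectl create -f pvc-restore.yaml        # PVC "hpvc-restore" (dataSource: snapshot)
kubectl create -f pv-pod-restore.yaml     # pod "task-pv-pod-restore"
kubectl wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m
```

`kubectl wait --for=jsonpath=...` replaces the repeated `get pvc -o jsonpath={.status.phase}` polling loops visible in the log above.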

TestAddons/parallel/Headlamp (11.24s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-984861 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-7f5bcd4678-6nknf" [19fc056a-d130-4195-bcef-63eab02e06be] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-7f5bcd4678-6nknf" [19fc056a-d130-4195-bcef-63eab02e06be] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003372632s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.24s)

TestAddons/parallel/CloudSpanner (6.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-85df47b6f4-xm7w4" [a0407843-1e17-4b2d-acdc-2631c097b206] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003342177s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

TestAddons/parallel/LocalPath (53.37s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-984861 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-984861 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-984861 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [425effe2-ff45-420d-b82e-e1aebf5574f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [425effe2-ff45-420d-b82e-e1aebf5574f9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [425effe2-ff45-420d-b82e-e1aebf5574f9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004091465s
addons_test.go:969: (dbg) Run:  kubectl --context addons-984861 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 ssh "cat /opt/local-path-provisioner/pvc-67fe2a11-a44e-43d1-a970-2847916e731f_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-984861 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-984861 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-984861 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.036029s)
--- PASS: TestAddons/parallel/LocalPath (53.37s)

TestAddons/parallel/NvidiaDevicePlugin (6.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-jk28q" [2c3176c4-3c4c-43be-8ad1-264f39ddc56b] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003494559s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

TestAddons/parallel/Yakd (10.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-gqh4f" [409faf6f-e078-4afc-b90a-841c966ee508] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004254054s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-984861 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-984861 addons disable yakd --alsologtostderr -v=1: (5.875669852s)
--- PASS: TestAddons/parallel/Yakd (10.88s)

TestAddons/StoppedEnableDisable (12.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-984861
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-984861: (12.180842228s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-984861
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-984861
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-984861
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

TestCertOptions (34.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-448220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-448220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.124579312s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-448220 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-448220 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-448220 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-448220" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-448220
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-448220: (2.077583596s)
--- PASS: TestCertOptions (34.24s)
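TestCertOptions passes `--apiserver-ips`/`--apiserver-names` to minikube and then verifies, via `openssl x509 -text -noout` over SSH, that the values landed in the apiserver certificate's SANs. The same inspection works against any certificate; a self-contained sketch that generates a throwaway cert carrying the SANs this run requested and reads them back (paths and the CN are illustrative, and `-addext` needs OpenSSL 1.1.1+):

```shell
set -eu
dir=$(mktemp -d)

# Self-signed throwaway cert with the SANs TestCertOptions configures.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15"

# Print only the SAN extension, the part the test greps for in apiserver.crt.
openssl x509 -noout -ext subjectAltName -in "$dir/cert.pem"
```

Against the real cluster the equivalent is the log's own command: `minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"`.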

TestCertExpiration (222.09s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-007057 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-007057 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (31.32451025s)
E1222 01:20:56.762752 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:21:29.153474 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-007057 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-007057 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.797300247s)
helpers_test.go:176: Cleaning up "cert-expiration-007057" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-007057
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-007057: (2.969343721s)
--- PASS: TestCertExpiration (222.09s)
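TestCertExpiration starts a cluster whose certs expire in 3 minutes, waits them out, and restarts with `--cert-expiration=8760h` to confirm minikube regenerates them. Whether a given cert is near expiry can be checked directly with `openssl x509 -checkend`; a hedged, self-contained sketch (the throwaway cert stands in for a minikube client cert, paths illustrative):

```shell
set -eu
dir=$(mktemp -d)

# Throwaway cert valid for 3 days, standing in for a profile's client.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 3 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" -subj "/CN=minikube"

openssl x509 -noout -enddate -in "$dir/cert.pem"

# -checkend N exits 0 iff the cert is still valid N seconds from now.
if openssl x509 -checkend 86400 -noout -in "$dir/cert.pem" >/dev/null; then
  echo "still valid one day from now"
fi
if ! openssl x509 -checkend 31536000 -noout -in "$dir/cert.pem" >/dev/null; then
  echo "will have expired within a year"
fi
```

The repeated `cert_rotation.go:172` errors in this block are client-go failing to reload certs for profiles (`functional-973657`, `addons-984861`) that earlier tests already deleted, not a failure of this test.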

TestForceSystemdFlag (38.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-018086 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-018086 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.701757359s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-018086 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-018086" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-018086
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-018086: (2.110166881s)
--- PASS: TestForceSystemdFlag (38.13s)

TestForceSystemdEnv (39.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-757246 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1222 01:17:52.210903 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:18:07.826649 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-757246 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.714997328s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-757246 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-757246" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-757246
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-757246: (2.044237684s)
--- PASS: TestForceSystemdEnv (39.08s)

TestDockerEnvContainerd (48.15s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-629803 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-629803 --driver=docker  --container-runtime=containerd: (32.651852184s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-629803"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-629803": (1.087802991s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CSDuwV3HBjVd/agent.1416101" SSH_AGENT_PID="1416102" DOCKER_HOST=ssh://docker@127.0.0.1:38375 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CSDuwV3HBjVd/agent.1416101" SSH_AGENT_PID="1416102" DOCKER_HOST=ssh://docker@127.0.0.1:38375 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CSDuwV3HBjVd/agent.1416101" SSH_AGENT_PID="1416102" DOCKER_HOST=ssh://docker@127.0.0.1:38375 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.277648236s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CSDuwV3HBjVd/agent.1416101" SSH_AGENT_PID="1416102" DOCKER_HOST=ssh://docker@127.0.0.1:38375 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-629803" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-629803
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-629803: (2.136789807s)
--- PASS: TestDockerEnvContainerd (48.15s)
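The docker_test.go lines above export `SSH_AUTH_SOCK`, `SSH_AGENT_PID`, and an `ssh://` `DOCKER_HOST` by hand so each `docker` invocation targets the daemon inside the minikube node. In normal use the same wiring is a single `eval`; a hedged sketch, with the profile name taken from this run and the image tag illustrative:

```shell
command -v minikube >/dev/null || exit 0   # skip where minikube is not installed

# Export DOCKER_HOST (ssh://) plus SSH agent vars for the in-node daemon.
eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-629803)"

docker version        # now reports the daemon inside the node
# BuildKit is disabled here, matching the test, so the legacy builder is used.
DOCKER_BUILDKIT=0 docker build \
  -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
docker image ls
```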

TestErrorSpam/setup (32.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-122636 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-122636 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-122636 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-122636 --driver=docker  --container-runtime=containerd: (32.632218764s)
--- PASS: TestErrorSpam/setup (32.63s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.19s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 status
--- PASS: TestErrorSpam/status (1.19s)

TestErrorSpam/pause (1.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 pause
--- PASS: TestErrorSpam/pause (1.68s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.61s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 stop: (1.390354205s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-122636 --log_dir /tmp/nospam-122636 stop
--- PASS: TestErrorSpam/stop (1.61s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (49.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-722318 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1222 00:11:29.161668 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:29.166952 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:29.177247 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:29.197509 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:29.237776 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:29.318072 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:29.478487 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:29.799102 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:30.439387 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:31.719619 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:34.279879 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:39.400796 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:11:49.641758 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-722318 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.788176089s)
--- PASS: TestFunctional/serial/StartWithProxy (49.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/SoftStart
I1222 00:11:52.759668 1396864 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-722318 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-722318 --alsologtostderr -v=8: (7.079589286s)
functional_test.go:678: soft start took 7.083078171s for "functional-722318" cluster.
I1222 00:11:59.839594 1396864 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (7.08s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-722318 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 cache add registry.k8s.io/pause:3.1: (1.585642377s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 cache add registry.k8s.io/pause:3.3: (1.160766846s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 cache add registry.k8s.io/pause:latest: (1.055722588s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.80s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-722318 /tmp/TestFunctionalserialCacheCmdcacheadd_local1945088347/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cache add minikube-local-cache-test:functional-722318
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cache delete minikube-local-cache-test:functional-722318
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-722318
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.92215ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 kubectl -- --context functional-722318 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-722318 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-722318 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1222 00:12:10.122212 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:12:51.082464 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-722318 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.450697139s)
functional_test.go:776: restart took 47.450814891s for "functional-722318" cluster.
I1222 00:12:55.329847 1396864 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (47.45s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-722318 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 logs: (1.447085632s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 logs --file /tmp/TestFunctionalserialLogsFileCmd1120310573/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 logs --file /tmp/TestFunctionalserialLogsFileCmd1120310573/001/logs.txt: (1.51844044s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-722318 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-722318
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-722318: exit status 115 (403.527506ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31029 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-722318 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 config get cpus: exit status 14 (80.410837ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 config get cpus: exit status 14 (73.496122ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-722318 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-722318 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1432886: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.01s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-722318 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-722318 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (236.719707ms)
-- stdout --
	* [functional-722318] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1222 00:13:38.307510 1432100 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:13:38.307662 1432100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:13:38.307704 1432100 out.go:374] Setting ErrFile to fd 2...
	I1222 00:13:38.307717 1432100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:13:38.307982 1432100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:13:38.308374 1432100 out.go:368] Setting JSON to false
	I1222 00:13:38.309573 1432100 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111371,"bootTime":1766251047,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:13:38.309648 1432100 start.go:143] virtualization:  
	I1222 00:13:38.313020 1432100 out.go:179] * [functional-722318] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:13:38.317021 1432100 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:13:38.317073 1432100 notify.go:221] Checking for updates...
	I1222 00:13:38.322795 1432100 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:13:38.325691 1432100 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:13:38.334230 1432100 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:13:38.338277 1432100 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:13:38.341267 1432100 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:13:38.344527 1432100 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 00:13:38.345197 1432100 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:13:38.380176 1432100 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:13:38.380396 1432100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:13:38.464213 1432100 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 00:13:38.45103599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:13:38.464311 1432100 docker.go:319] overlay module found
	I1222 00:13:38.467831 1432100 out.go:179] * Using the docker driver based on existing profile
	I1222 00:13:38.471588 1432100 start.go:309] selected driver: docker
	I1222 00:13:38.471611 1432100 start.go:928] validating driver "docker" against &{Name:functional-722318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-722318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:13:38.471731 1432100 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:13:38.475230 1432100 out.go:203] 
	W1222 00:13:38.478053 1432100 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 00:13:38.481418 1432100 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-722318 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-722318 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-722318 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (267.023261ms)
-- stdout --
	* [functional-722318] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1222 00:13:38.842899 1432293 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:13:38.843132 1432293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:13:38.843186 1432293 out.go:374] Setting ErrFile to fd 2...
	I1222 00:13:38.843207 1432293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:13:38.846987 1432293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:13:38.848510 1432293 out.go:368] Setting JSON to false
	I1222 00:13:38.849817 1432293 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111372,"bootTime":1766251047,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:13:38.849924 1432293 start.go:143] virtualization:  
	I1222 00:13:38.854201 1432293 out.go:179] * [functional-722318] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1222 00:13:38.858311 1432293 notify.go:221] Checking for updates...
	I1222 00:13:38.861952 1432293 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:13:38.865277 1432293 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:13:38.868249 1432293 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:13:38.871178 1432293 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:13:38.874213 1432293 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:13:38.877209 1432293 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:13:38.880648 1432293 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 00:13:38.881303 1432293 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:13:38.943761 1432293 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:13:38.943900 1432293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:13:39.022713 1432293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-22 00:13:39.011846306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:13:39.022823 1432293 docker.go:319] overlay module found
	I1222 00:13:39.025882 1432293 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1222 00:13:39.028821 1432293 start.go:309] selected driver: docker
	I1222 00:13:39.028849 1432293 start.go:928] validating driver "docker" against &{Name:functional-722318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-722318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:13:39.028968 1432293 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:13:39.032575 1432293 out.go:203] 
	W1222 00:13:39.035502 1432293 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1222 00:13:39.038482 1432293 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)

TestFunctional/parallel/ServiceCmdConnect (7.78s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-722318 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-722318 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-k5qpt" [0a570d89-0dea-4afb-aaa6-c44c35626359] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-k5qpt" [0a570d89-0dea-4afb-aaa6-c44c35626359] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003146869s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32707
functional_test.go:1680: http://192.168.49.2:32707: success! body:
Request served by hello-node-connect-7d85dfc575-k5qpt

HTTP/1.1 GET /

Host: 192.168.49.2:32707
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.78s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (20.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [78a4ce11-8599-4762-8338-a96abd710a88] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003590909s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-722318 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-722318 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-722318 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-722318 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [649d8d02-ffd6-4c44-966a-4462b8c9c1ae] Pending
helpers_test.go:353: "sp-pod" [649d8d02-ffd6-4c44-966a-4462b8c9c1ae] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003792302s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-722318 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-722318 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-722318 delete -f testdata/storage-provisioner/pod.yaml: (1.383101644s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-722318 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [962d3c87-6257-4503-aff1-490bcf7f285e] Pending
helpers_test.go:353: "sp-pod" [962d3c87-6257-4503-aff1-490bcf7f285e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003190125s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-722318 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.57s)

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (2.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh -n functional-722318 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cp functional-722318:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2030658272/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh -n functional-722318 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh -n functional-722318 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1396864/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo cat /etc/test/nested/copy/1396864/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.23s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1396864.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo cat /etc/ssl/certs/1396864.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1396864.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo cat /usr/share/ca-certificates/1396864.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/13968642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo cat /etc/ssl/certs/13968642.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/13968642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo cat /usr/share/ca-certificates/13968642.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-722318 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 ssh "sudo systemctl is-active docker": exit status 1 (385.242121ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 ssh "sudo systemctl is-active crio": exit status 1 (393.114359ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 version -o=json --components: (1.566140936s)
--- PASS: TestFunctional/parallel/Version/components (1.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-722318 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-722318
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-722318
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-722318 image ls --format short --alsologtostderr:
I1222 00:13:46.719824 1433947 out.go:360] Setting OutFile to fd 1 ...
I1222 00:13:46.719978 1433947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:46.719990 1433947 out.go:374] Setting ErrFile to fd 2...
I1222 00:13:46.719995 1433947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:46.720261 1433947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:13:46.720864 1433947 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:46.721007 1433947 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:46.721581 1433947 cli_runner.go:164] Run: docker container inspect functional-722318 --format={{.State.Status}}
I1222 00:13:46.748077 1433947 ssh_runner.go:195] Run: systemctl --version
I1222 00:13:46.748140 1433947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-722318
I1222 00:13:46.767928 1433947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38385 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-722318/id_rsa Username:docker}
I1222 00:13:46.877440 1433947 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-722318 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                    IMAGE                    │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/kindest/kindnetd                  │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1                               │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.3                               │ sha256:4461da │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ latest                                │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-722318                     │ sha256:082049 │ 992B   │
│ public.ecr.aws/nginx/nginx                  │ alpine                                │ sha256:962dbb │ 23MB   │
│ registry.k8s.io/etcd                        │ 3.6.5-0                               │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.3                               │ sha256:cf65ae │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.3                               │ sha256:7ada8f │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.3                               │ sha256:2f2aa2 │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ docker.io/kicbase/echo-server               │ functional-722318                     │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest                                │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
└─────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-722318 image ls --format table --alsologtostderr:
I1222 00:13:48.350216 1434201 out.go:360] Setting OutFile to fd 1 ...
I1222 00:13:48.350419 1434201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:48.350549 1434201 out.go:374] Setting ErrFile to fd 2...
I1222 00:13:48.350575 1434201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:48.350928 1434201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:13:48.351645 1434201 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:48.351830 1434201 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:48.352381 1434201 cli_runner.go:164] Run: docker container inspect functional-722318 --format={{.State.Status}}
I1222 00:13:48.371393 1434201 ssh_runner.go:195] Run: systemctl --version
I1222 00:13:48.371453 1434201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-722318
I1222 00:13:48.389918 1434201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38385 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-722318/id_rsa Username:docker}
I1222 00:13:48.489260 1434201 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-722318 image ls --format json --alsologtostderr:
[{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22987510"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-722318","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:082049fa7835e23c46c09f80be520b2afb0d7d032957be9d461df564fef85ac1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-722318"],"size":"992"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"22804272"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"20719958"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad0
45384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"24567639"},{"id":"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"15776215"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-722318 image ls --format json --alsologtostderr:
I1222 00:13:48.119612 1434164 out.go:360] Setting OutFile to fd 1 ...
I1222 00:13:48.119779 1434164 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:48.119818 1434164 out.go:374] Setting ErrFile to fd 2...
I1222 00:13:48.119840 1434164 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:48.120136 1434164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:13:48.120848 1434164 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:48.121070 1434164 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:48.121621 1434164 cli_runner.go:164] Run: docker container inspect functional-722318 --format={{.State.Status}}
I1222 00:13:48.139835 1434164 ssh_runner.go:195] Run: systemctl --version
I1222 00:13:48.139897 1434164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-722318
I1222 00:13:48.159554 1434164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38385 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-722318/id_rsa Username:docker}
I1222 00:13:48.261263 1434164 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-722318 image ls --format yaml --alsologtostderr:
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22987510"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-722318
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:082049fa7835e23c46c09f80be520b2afb0d7d032957be9d461df564fef85ac1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-722318
size: "992"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "24567639"
- id: sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "20719958"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "22804272"
- id: sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "15776215"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-722318 image ls --format yaml --alsologtostderr:
I1222 00:13:46.992071 1433987 out.go:360] Setting OutFile to fd 1 ...
I1222 00:13:46.992312 1433987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:46.992349 1433987 out.go:374] Setting ErrFile to fd 2...
I1222 00:13:46.992371 1433987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:46.992660 1433987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:13:46.993385 1433987 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:46.993617 1433987 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:46.994189 1433987 cli_runner.go:164] Run: docker container inspect functional-722318 --format={{.State.Status}}
I1222 00:13:47.014037 1433987 ssh_runner.go:195] Run: systemctl --version
I1222 00:13:47.014142 1433987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-722318
I1222 00:13:47.038165 1433987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38385 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-722318/id_rsa Username:docker}
I1222 00:13:47.140871 1433987 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 ssh pgrep buildkitd: exit status 1 (272.519509ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr
2025/12/22 00:13:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr: (3.509919567s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr:
I1222 00:13:47.503140 1434091 out.go:360] Setting OutFile to fd 1 ...
I1222 00:13:47.504522 1434091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:47.504592 1434091 out.go:374] Setting ErrFile to fd 2...
I1222 00:13:47.504608 1434091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:47.504906 1434091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:13:47.505597 1434091 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:47.509052 1434091 config.go:182] Loaded profile config "functional-722318": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1222 00:13:47.509672 1434091 cli_runner.go:164] Run: docker container inspect functional-722318 --format={{.State.Status}}
I1222 00:13:47.531248 1434091 ssh_runner.go:195] Run: systemctl --version
I1222 00:13:47.531307 1434091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-722318
I1222 00:13:47.551378 1434091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38385 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-722318/id_rsa Username:docker}
I1222 00:13:47.650606 1434091 build_images.go:162] Building image from path: /tmp/build.3024839969.tar
I1222 00:13:47.650683 1434091 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1222 00:13:47.659138 1434091 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3024839969.tar
I1222 00:13:47.663301 1434091 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3024839969.tar: stat -c "%s %y" /var/lib/minikube/build/build.3024839969.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3024839969.tar': No such file or directory
I1222 00:13:47.663334 1434091 ssh_runner.go:362] scp /tmp/build.3024839969.tar --> /var/lib/minikube/build/build.3024839969.tar (3072 bytes)
I1222 00:13:47.684221 1434091 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3024839969
I1222 00:13:47.692481 1434091 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3024839969 -xf /var/lib/minikube/build/build.3024839969.tar
I1222 00:13:47.701000 1434091 containerd.go:394] Building image: /var/lib/minikube/build/build.3024839969
I1222 00:13:47.701113 1434091 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3024839969 --local dockerfile=/var/lib/minikube/build/build.3024839969 --output type=image,name=localhost/my-image:functional-722318
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1adad8163d9c3e5cbb36581cb0b5ef1b27df3fb4a9e111752aabe7b148fb02e0
#8 exporting manifest sha256:1adad8163d9c3e5cbb36581cb0b5ef1b27df3fb4a9e111752aabe7b148fb02e0 0.0s done
#8 exporting config sha256:a8e367d25e5bb82d74b57b1112c41a2e51e9af09ab86b2eb8988f07147cbd2e0 0.0s done
#8 naming to localhost/my-image:functional-722318 done
#8 DONE 0.2s
I1222 00:13:50.931758 1434091 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3024839969 --local dockerfile=/var/lib/minikube/build/build.3024839969 --output type=image,name=localhost/my-image:functional-722318: (3.230608985s)
I1222 00:13:50.931832 1434091 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3024839969
I1222 00:13:50.940062 1434091 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3024839969.tar
I1222 00:13:50.948696 1434091 build_images.go:218] Built localhost/my-image:functional-722318 from /tmp/build.3024839969.tar
I1222 00:13:50.948728 1434091 build_images.go:134] succeeded building to: functional-722318
I1222 00:13:50.948734 1434091 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-722318
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image load --daemon kicbase/echo-server:functional-722318 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 image load --daemon kicbase/echo-server:functional-722318 --alsologtostderr: (1.171754953s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image load --daemon kicbase/echo-server:functional-722318 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-722318 image load --daemon kicbase/echo-server:functional-722318 --alsologtostderr: (1.079553049s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-722318 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-722318 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-z8hq6" [ce8a34fd-4c61-4cb6-9f01-abec482b2ead] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-z8hq6" [ce8a34fd-4c61-4cb6-9f01-abec482b2ead] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.002877924s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-722318
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image load --daemon kicbase/echo-server:functional-722318 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image save kicbase/echo-server:functional-722318 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image rm kicbase/echo-server:functional-722318 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-722318
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 image save --daemon kicbase/echo-server:functional-722318 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-722318
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-722318 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-722318 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-722318 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1429801: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-722318 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-722318 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-722318 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [d79154fd-f25a-44db-96d6-6ac636e22182] Pending
helpers_test.go:353: "nginx-svc" [d79154fd-f25a-44db-96d6-6ac636e22182] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [d79154fd-f25a-44db-96d6-6ac636e22182] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003607042s
I1222 00:13:21.632089 1396864 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 service list -o json
functional_test.go:1504: Took "353.581335ms" to run "out/minikube-linux-arm64 -p functional-722318 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32505
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32505
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-722318 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.155.82 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-722318 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "459.024182ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.148156ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "372.260855ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.736682ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (8.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdany-port4211381619/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766362412407650556" to /tmp/TestFunctionalparallelMountCmdany-port4211381619/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766362412407650556" to /tmp/TestFunctionalparallelMountCmdany-port4211381619/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766362412407650556" to /tmp/TestFunctionalparallelMountCmdany-port4211381619/001/test-1766362412407650556
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (329.844831ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1222 00:13:32.739198 1396864 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 22 00:13 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 22 00:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 22 00:13 test-1766362412407650556
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh cat /mount-9p/test-1766362412407650556
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-722318 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [7a5a1b9d-00e2-424f-8793-804797c4e171] Pending
helpers_test.go:353: "busybox-mount" [7a5a1b9d-00e2-424f-8793-804797c4e171] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [7a5a1b9d-00e2-424f-8793-804797c4e171] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [7a5a1b9d-00e2-424f-8793-804797c4e171] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003798428s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-722318 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdany-port4211381619/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.05s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdspecific-port4022870112/001:/mount-9p --alsologtostderr -v=1 --port 45835]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (578.117719ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1222 00:13:41.031743 1396864 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdspecific-port4022870112/001:/mount-9p --alsologtostderr -v=1 --port 45835] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-722318 ssh "sudo umount -f /mount-9p": exit status 1 (363.915188ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-722318 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdspecific-port4022870112/001:/mount-9p --alsologtostderr -v=1 --port 45835] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-722318 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-722318 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-722318
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-722318
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-722318
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-973657 cache add registry.k8s.io/pause:3.1: (1.11625326s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-973657 cache add registry.k8s.io/pause:3.3: (1.103956722s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-973657 cache add registry.k8s.io/pause:latest: (1.045467335s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC1244756985/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cache add minikube-local-cache-test:functional-973657
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cache delete minikube-local-cache-test:functional-973657
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-973657
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.840245ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3973936297/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 config get cpus: exit status 14 (99.297007ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 config get cpus: exit status 14 (75.539117ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (184.974942ms)
-- stdout --
	* [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1222 00:43:20.217505 1463748 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:43:20.217841 1463748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.217864 1463748 out.go:374] Setting ErrFile to fd 2...
	I1222 00:43:20.217871 1463748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.218217 1463748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:43:20.218612 1463748 out.go:368] Setting JSON to false
	I1222 00:43:20.219569 1463748 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":113153,"bootTime":1766251047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:43:20.219644 1463748 start.go:143] virtualization:  
	I1222 00:43:20.222887 1463748 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 00:43:20.226562 1463748 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:43:20.226678 1463748 notify.go:221] Checking for updates...
	I1222 00:43:20.232195 1463748 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:43:20.235325 1463748 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:43:20.238258 1463748 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:43:20.241144 1463748 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:43:20.244115 1463748 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:43:20.247504 1463748 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:43:20.248241 1463748 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:43:20.268941 1463748 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:43:20.269063 1463748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.330624 1463748 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.320856699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.330738 1463748 docker.go:319] overlay module found
	I1222 00:43:20.333908 1463748 out.go:179] * Using the docker driver based on existing profile
	I1222 00:43:20.336643 1463748 start.go:309] selected driver: docker
	I1222 00:43:20.336669 1463748 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.336777 1463748 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:43:20.340299 1463748 out.go:203] 
	W1222 00:43:20.343211 1463748 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 00:43:20.345880 1463748 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-973657 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-973657 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (220.222494ms)

-- stdout --
	* [functional-973657] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1222 00:43:20.011587 1463694 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:43:20.011884 1463694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.011898 1463694 out.go:374] Setting ErrFile to fd 2...
	I1222 00:43:20.011904 1463694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:43:20.012392 1463694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:43:20.012872 1463694 out.go:368] Setting JSON to false
	I1222 00:43:20.013919 1463694 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":113153,"bootTime":1766251047,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 00:43:20.013997 1463694 start.go:143] virtualization:  
	I1222 00:43:20.018308 1463694 out.go:179] * [functional-973657] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1222 00:43:20.021844 1463694 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 00:43:20.022054 1463694 notify.go:221] Checking for updates...
	I1222 00:43:20.028060 1463694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 00:43:20.031059 1463694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 00:43:20.034189 1463694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 00:43:20.037154 1463694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 00:43:20.040210 1463694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 00:43:20.043641 1463694 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 00:43:20.044289 1463694 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 00:43:20.086890 1463694 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 00:43:20.087029 1463694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:43:20.145321 1463694 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 00:43:20.134923356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:43:20.145439 1463694 docker.go:319] overlay module found
	I1222 00:43:20.148548 1463694 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1222 00:43:20.151463 1463694 start.go:309] selected driver: docker
	I1222 00:43:20.151486 1463694 start.go:928] validating driver "docker" against &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1222 00:43:20.151591 1463694 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 00:43:20.155189 1463694 out.go:203] 
	W1222 00:43:20.158235 1463694 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1222 00:43:20.161142 1463694 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh -n functional-973657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cp functional-973657:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3337385621/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh -n functional-973657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh -n functional-973657 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1396864/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo cat /etc/test/nested/copy/1396864/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1396864.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo cat /etc/ssl/certs/1396864.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1396864.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo cat /usr/share/ca-certificates/1396864.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/13968642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo cat /etc/ssl/certs/13968642.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/13968642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo cat /usr/share/ca-certificates/13968642.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh "sudo systemctl is-active docker": exit status 1 (272.42467ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh "sudo systemctl is-active crio": exit status 1 (293.22391ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-973657 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "369.476387ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.331161ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "324.624345ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.722026ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4291673907/001:/mount-9p --alsologtostderr -v=1 --port 44921]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.555731ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4291673907/001:/mount-9p --alsologtostderr -v=1 --port 44921] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh "sudo umount -f /mount-9p": exit status 1 (265.994726ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-973657 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4291673907/001:/mount-9p --alsologtostderr -v=1 --port 44921] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T" /mount1: exit status 1 (570.57731ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-973657 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-973657 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2375414462/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-973657 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-973657
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-973657
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-973657 image ls --format short --alsologtostderr:
I1222 00:43:32.847629 1465918 out.go:360] Setting OutFile to fd 1 ...
I1222 00:43:32.847791 1465918 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:32.847807 1465918 out.go:374] Setting ErrFile to fd 2...
I1222 00:43:32.847814 1465918 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:32.848074 1465918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:43:32.848684 1465918 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:32.848812 1465918 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:32.849314 1465918 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:43:32.866952 1465918 ssh_runner.go:195] Run: systemctl --version
I1222 00:43:32.867012 1465918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:43:32.884353 1465918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:43:32.980881 1465918 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-973657 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-973657  │ sha256:082049 │ 992B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-rc.1       │ sha256:a34b34 │ 20.7MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-rc.1       │ sha256:3c6ba2 │ 24.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-rc.1       │ sha256:7e3ace │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ localhost/my-image                          │ functional-973657  │ sha256:233ea7 │ 831kB  │
│ registry.k8s.io/etcd                        │ 3.6.6-0            │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-rc.1       │ sha256:abca4d │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kicbase/echo-server               │ functional-973657  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-973657 image ls --format table --alsologtostderr:
I1222 00:43:37.129637 1466308 out.go:360] Setting OutFile to fd 1 ...
I1222 00:43:37.129792 1466308 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:37.129816 1466308 out.go:374] Setting ErrFile to fd 2...
I1222 00:43:37.129841 1466308 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:37.130146 1466308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:43:37.130831 1466308 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:37.131009 1466308 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:37.131573 1466308 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:43:37.151696 1466308 ssh_runner.go:195] Run: systemctl --version
I1222 00:43:37.151765 1466308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:43:37.169779 1466308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:43:37.269031 1466308 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-973657 image ls --format json --alsologtostderr:
[{"id":"sha256:233ea769d234c4331d518d7a6819030294f0233ae21105553a0d59ca79c33439","repoDigests":[],"repoTags":["localhost/my-image:functional-973657"],"size":"830601"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e","repoDigests":["registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"22432301"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-973657"],"size":"2173567"},{"id":"sha256:082049fa7835e23c46c09f80be520b2afb0d7d032957be9d461df564fef85ac1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-973657"],"size":"992"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"20672157"},{"id":"sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"15405535"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54","repoDigests":["registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"24692223"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-973657 image ls --format json --alsologtostderr:
I1222 00:43:36.901432 1466272 out.go:360] Setting OutFile to fd 1 ...
I1222 00:43:36.901710 1466272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:36.901759 1466272 out.go:374] Setting ErrFile to fd 2...
I1222 00:43:36.901779 1466272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:36.902187 1466272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:43:36.902973 1466272 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:36.903197 1466272 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:36.903849 1466272 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:43:36.923857 1466272 ssh_runner.go:195] Run: systemctl --version
I1222 00:43:36.923917 1466272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:43:36.941888 1466272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:43:37.041646 1466272 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-973657 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-973657
size: "2173567"
- id: sha256:082049fa7835e23c46c09f80be520b2afb0d7d032957be9d461df564fef85ac1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-973657
size: "992"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "24692223"
- id: sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "22432301"
- id: sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "15405535"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "20672157"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-973657 image ls --format yaml --alsologtostderr:
I1222 00:43:33.066676 1465953 out.go:360] Setting OutFile to fd 1 ...
I1222 00:43:33.066787 1465953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:33.066798 1465953 out.go:374] Setting ErrFile to fd 2...
I1222 00:43:33.066804 1465953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:33.067054 1465953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:43:33.067723 1465953 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:33.067861 1465953 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:33.068396 1465953 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:43:33.086265 1465953 ssh_runner.go:195] Run: systemctl --version
I1222 00:43:33.086327 1465953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:43:33.103703 1465953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:43:33.206335 1465953 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-973657 ssh pgrep buildkitd: exit status 1 (264.160541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image build -t localhost/my-image:functional-973657 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-973657 image build -t localhost/my-image:functional-973657 testdata/build --alsologtostderr: (3.121255198s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-973657 image build -t localhost/my-image:functional-973657 testdata/build --alsologtostderr:
I1222 00:43:33.569141 1466059 out.go:360] Setting OutFile to fd 1 ...
I1222 00:43:33.569282 1466059 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:33.569316 1466059 out.go:374] Setting ErrFile to fd 2...
I1222 00:43:33.569328 1466059 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:43:33.569598 1466059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:43:33.570564 1466059 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:33.571232 1466059 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:43:33.571820 1466059 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:43:33.589400 1466059 ssh_runner.go:195] Run: systemctl --version
I1222 00:43:33.589460 1466059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:43:33.606708 1466059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:43:33.709013 1466059 build_images.go:162] Building image from path: /tmp/build.2740162774.tar
I1222 00:43:33.709084 1466059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1222 00:43:33.717247 1466059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2740162774.tar
I1222 00:43:33.721042 1466059 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2740162774.tar: stat -c "%s %y" /var/lib/minikube/build/build.2740162774.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2740162774.tar': No such file or directory
I1222 00:43:33.721075 1466059 ssh_runner.go:362] scp /tmp/build.2740162774.tar --> /var/lib/minikube/build/build.2740162774.tar (3072 bytes)
I1222 00:43:33.739790 1466059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2740162774
I1222 00:43:33.747749 1466059 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2740162774 -xf /var/lib/minikube/build/build.2740162774.tar
I1222 00:43:33.755757 1466059 containerd.go:394] Building image: /var/lib/minikube/build/build.2740162774
I1222 00:43:33.755832 1466059 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2740162774 --local dockerfile=/var/lib/minikube/build/build.2740162774 --output type=image,name=localhost/my-image:functional-973657
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:64be65caa8fc904d32ac7be1059a5c78e2cd5abd8090239778359274e2f06102
#8 exporting manifest sha256:64be65caa8fc904d32ac7be1059a5c78e2cd5abd8090239778359274e2f06102 0.0s done
#8 exporting config sha256:233ea769d234c4331d518d7a6819030294f0233ae21105553a0d59ca79c33439 0.0s done
#8 naming to localhost/my-image:functional-973657 done
#8 DONE 0.2s
I1222 00:43:36.596813 1466059 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2740162774 --local dockerfile=/var/lib/minikube/build/build.2740162774 --output type=image,name=localhost/my-image:functional-973657: (2.840951584s)
I1222 00:43:36.596883 1466059 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2740162774
I1222 00:43:36.605802 1466059 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2740162774.tar
I1222 00:43:36.614208 1466059 build_images.go:218] Built localhost/my-image:functional-973657 from /tmp/build.2740162774.tar
I1222 00:43:36.614243 1466059 build_images.go:134] succeeded building to: functional-973657
I1222 00:43:36.614248 1466059 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.61s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-973657
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image load --daemon kicbase/echo-server:functional-973657 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image load --daemon kicbase/echo-server:functional-973657 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-973657
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image load --daemon kicbase/echo-server:functional-973657 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image save kicbase/echo-server:functional-973657 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image rm kicbase/echo-server:functional-973657 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-973657
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 image save --daemon kicbase/echo-server:functional-973657 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-973657
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-973657 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-973657
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-973657
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-973657
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (143.81s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1222 00:45:56.760704 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:56.765953 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:56.776245 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:56.796471 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:56.836756 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:56.917081 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:57.077482 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:57.398133 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:58.038805 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:45:59.319309 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:46:01.880152 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:46:07.001180 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:46:17.241429 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:46:29.153659 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:46:37.722553 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:47:18.683611 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m22.918504261s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (143.81s)

TestMultiControlPlane/serial/DeployApp (7.36s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 kubectl -- rollout status deployment/busybox: (4.293197889s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-7vxth -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-c9shz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-hvgkc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-7vxth -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-c9shz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-hvgkc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-7vxth -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-c9shz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-hvgkc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.36s)

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-7vxth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-7vxth -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-c9shz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-c9shz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-hvgkc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 kubectl -- exec busybox-7b57f96db7-hvgkc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
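The PingHostFromPods steps above recover the host gateway IP by slicing busybox `nslookup` output with `awk 'NR==5' | cut -d' ' -f3`, then pinging the result. The pipeline can be replayed against canned output; the nslookup text below is an illustrative reconstruction of the busybox format, not captured from this run:

```shell
# Replay of the IP-extraction pipeline from ha_test.go:207 against a
# canned busybox-style nslookup answer (sample text, not from this run).
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# line 5 holds the answer record; field 3 (space-delimited) is the bare IP
host_ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # → 192.168.49.1
```

The follow-up step in the log (`ping -c 1 192.168.49.1`) pings exactly this extracted address; note the `NR==5` slice is brittle against resolvers that print a different number of header lines.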

TestMultiControlPlane/serial/AddWorkerNode (30.57s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 node add --alsologtostderr -v 5: (29.473789435s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5: (1.096167205s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.57s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-522123 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.049855335s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

TestMultiControlPlane/serial/CopyFile (20.15s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 status --output json --alsologtostderr -v 5: (1.096551127s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp testdata/cp-test.txt ha-522123:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4068047390/001/cp-test_ha-522123.txt
E1222 00:48:07.826277 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123:/home/docker/cp-test.txt ha-522123-m02:/home/docker/cp-test_ha-522123_ha-522123-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test_ha-522123_ha-522123-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123:/home/docker/cp-test.txt ha-522123-m03:/home/docker/cp-test_ha-522123_ha-522123-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test_ha-522123_ha-522123-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123:/home/docker/cp-test.txt ha-522123-m04:/home/docker/cp-test_ha-522123_ha-522123-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test_ha-522123_ha-522123-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp testdata/cp-test.txt ha-522123-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4068047390/001/cp-test_ha-522123-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m02:/home/docker/cp-test.txt ha-522123:/home/docker/cp-test_ha-522123-m02_ha-522123.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test_ha-522123-m02_ha-522123.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m02:/home/docker/cp-test.txt ha-522123-m03:/home/docker/cp-test_ha-522123-m02_ha-522123-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test_ha-522123-m02_ha-522123-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m02:/home/docker/cp-test.txt ha-522123-m04:/home/docker/cp-test_ha-522123-m02_ha-522123-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test_ha-522123-m02_ha-522123-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp testdata/cp-test.txt ha-522123-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4068047390/001/cp-test_ha-522123-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m03:/home/docker/cp-test.txt ha-522123:/home/docker/cp-test_ha-522123-m03_ha-522123.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test_ha-522123-m03_ha-522123.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m03:/home/docker/cp-test.txt ha-522123-m02:/home/docker/cp-test_ha-522123-m03_ha-522123-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test_ha-522123-m03_ha-522123-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m03:/home/docker/cp-test.txt ha-522123-m04:/home/docker/cp-test_ha-522123-m03_ha-522123-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test_ha-522123-m03_ha-522123-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp testdata/cp-test.txt ha-522123-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4068047390/001/cp-test_ha-522123-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m04:/home/docker/cp-test.txt ha-522123:/home/docker/cp-test_ha-522123-m04_ha-522123.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123 "sudo cat /home/docker/cp-test_ha-522123-m04_ha-522123.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m04:/home/docker/cp-test.txt ha-522123-m02:/home/docker/cp-test_ha-522123-m04_ha-522123-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m02 "sudo cat /home/docker/cp-test_ha-522123-m04_ha-522123-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 cp ha-522123-m04:/home/docker/cp-test.txt ha-522123-m03:/home/docker/cp-test_ha-522123-m04_ha-522123-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 ssh -n ha-522123-m03 "sudo cat /home/docker/cp-test_ha-522123-m04_ha-522123-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.15s)
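The CopyFile section above exercises every source/destination pair across the four nodes, naming each cross-copy `cp-test_<src>_<dst>.txt` and verifying it with `ssh -n ... sudo cat`. The pair matrix can be sketched locally with plain `cp` and directories standing in for nodes — no cluster required; everything below is a stand-in for the `minikube cp` calls in the log:

```shell
# Local sketch of the CopyFile pair matrix: directories stand in for the
# ha-522123 nodes, `cp` stands in for `minikube -p ha-522123 cp`.
set -eu
root=$(mktemp -d)
nodes="ha-522123 ha-522123-m02 ha-522123-m03 ha-522123-m04"

# seed each "node" with cp-test.txt (the testdata/cp-test.txt push step)
for n in $nodes; do
  mkdir -p "$root/$n"
  printf 'cp-test payload\n' > "$root/$n/cp-test.txt"
done

# cross-copy between every distinct pair, using the log's naming scheme
for src in $nodes; do
  for dst in $nodes; do
    [ "$src" = "$dst" ] && continue
    cp "$root/$src/cp-test.txt" "$root/$dst/cp-test_${src}_${dst}.txt"
  done
done

# each node now holds its original file plus three cross-copies
ls "$root/ha-522123-m04"
rm -rf "$root"
```

This is why the section runs so many near-identical commands: with n nodes the matrix is n seed copies plus n·(n-1) cross-copies, each followed by a cat-based verification.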

TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 node stop m02 --alsologtostderr -v 5: (12.203950481s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5: exit status 7 (809.667869ms)

-- stdout --
	ha-522123
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-522123-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-522123-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-522123-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1222 00:48:38.231978 1483623 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:48:38.232189 1483623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:38.232238 1483623 out.go:374] Setting ErrFile to fd 2...
	I1222 00:48:38.232261 1483623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:48:38.232694 1483623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:48:38.233003 1483623 out.go:368] Setting JSON to false
	I1222 00:48:38.233084 1483623 mustload.go:66] Loading cluster: ha-522123
	I1222 00:48:38.233188 1483623 notify.go:221] Checking for updates...
	I1222 00:48:38.234473 1483623 config.go:182] Loaded profile config "ha-522123": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 00:48:38.234534 1483623 status.go:174] checking status of ha-522123 ...
	I1222 00:48:38.235234 1483623 cli_runner.go:164] Run: docker container inspect ha-522123 --format={{.State.Status}}
	I1222 00:48:38.257482 1483623 status.go:371] ha-522123 host status = "Running" (err=<nil>)
	I1222 00:48:38.257505 1483623 host.go:66] Checking if "ha-522123" exists ...
	I1222 00:48:38.257822 1483623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-522123
	I1222 00:48:38.290057 1483623 host.go:66] Checking if "ha-522123" exists ...
	I1222 00:48:38.290409 1483623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:48:38.290459 1483623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-522123
	I1222 00:48:38.309992 1483623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38395 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/ha-522123/id_rsa Username:docker}
	I1222 00:48:38.412281 1483623 ssh_runner.go:195] Run: systemctl --version
	I1222 00:48:38.419178 1483623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:48:38.432944 1483623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 00:48:38.497437 1483623 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-22 00:48:38.481824129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 00:48:38.498003 1483623 kubeconfig.go:125] found "ha-522123" server: "https://192.168.49.254:8443"
	I1222 00:48:38.498047 1483623 api_server.go:166] Checking apiserver status ...
	I1222 00:48:38.498141 1483623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:48:38.514943 1483623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	I1222 00:48:38.524389 1483623 api_server.go:182] apiserver freezer: "6:freezer:/docker/b8cb78f7a56655da491690cf2f87de05749c114fcf8071ca2d92278cf60ded07/kubepods/burstable/pod42a42ec8ab4b9dcc327e74fdb38bcf8b/ae21b8340a5f259b61ba50542cf18d69961a8112579e02144b982aeee256f916"
	I1222 00:48:38.524471 1483623 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8cb78f7a56655da491690cf2f87de05749c114fcf8071ca2d92278cf60ded07/kubepods/burstable/pod42a42ec8ab4b9dcc327e74fdb38bcf8b/ae21b8340a5f259b61ba50542cf18d69961a8112579e02144b982aeee256f916/freezer.state
	I1222 00:48:38.532458 1483623 api_server.go:204] freezer state: "THAWED"
	I1222 00:48:38.532492 1483623 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1222 00:48:38.541278 1483623 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1222 00:48:38.541312 1483623 status.go:463] ha-522123 apiserver status = Running (err=<nil>)
	I1222 00:48:38.541324 1483623 status.go:176] ha-522123 status: &{Name:ha-522123 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:48:38.541343 1483623 status.go:174] checking status of ha-522123-m02 ...
	I1222 00:48:38.541669 1483623 cli_runner.go:164] Run: docker container inspect ha-522123-m02 --format={{.State.Status}}
	I1222 00:48:38.560535 1483623 status.go:371] ha-522123-m02 host status = "Stopped" (err=<nil>)
	I1222 00:48:38.560566 1483623 status.go:384] host is not running, skipping remaining checks
	I1222 00:48:38.560574 1483623 status.go:176] ha-522123-m02 status: &{Name:ha-522123-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:48:38.560596 1483623 status.go:174] checking status of ha-522123-m03 ...
	I1222 00:48:38.560928 1483623 cli_runner.go:164] Run: docker container inspect ha-522123-m03 --format={{.State.Status}}
	I1222 00:48:38.580749 1483623 status.go:371] ha-522123-m03 host status = "Running" (err=<nil>)
	I1222 00:48:38.580771 1483623 host.go:66] Checking if "ha-522123-m03" exists ...
	I1222 00:48:38.581275 1483623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-522123-m03
	I1222 00:48:38.600940 1483623 host.go:66] Checking if "ha-522123-m03" exists ...
	I1222 00:48:38.601241 1483623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:48:38.601290 1483623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-522123-m03
	I1222 00:48:38.629867 1483623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38405 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/ha-522123-m03/id_rsa Username:docker}
	I1222 00:48:38.729320 1483623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:48:38.742795 1483623 kubeconfig.go:125] found "ha-522123" server: "https://192.168.49.254:8443"
	I1222 00:48:38.742823 1483623 api_server.go:166] Checking apiserver status ...
	I1222 00:48:38.742873 1483623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 00:48:38.759490 1483623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup
	I1222 00:48:38.769564 1483623 api_server.go:182] apiserver freezer: "6:freezer:/docker/71bea4a16043a80be9632fcb2058526fb4e3e24ed45e25c9932b053001c1e2b9/kubepods/burstable/pod0c09feb73b6e429d4e4cb8fa87a422fe/c5eeacace5fffbe71823828c73f68e7ac8a2465d298674b4b3f2f8cae7720a98"
	I1222 00:48:38.769639 1483623 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/71bea4a16043a80be9632fcb2058526fb4e3e24ed45e25c9932b053001c1e2b9/kubepods/burstable/pod0c09feb73b6e429d4e4cb8fa87a422fe/c5eeacace5fffbe71823828c73f68e7ac8a2465d298674b4b3f2f8cae7720a98/freezer.state
	I1222 00:48:38.779011 1483623 api_server.go:204] freezer state: "THAWED"
	I1222 00:48:38.779042 1483623 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1222 00:48:38.788755 1483623 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1222 00:48:38.788788 1483623 status.go:463] ha-522123-m03 apiserver status = Running (err=<nil>)
	I1222 00:48:38.788797 1483623 status.go:176] ha-522123-m03 status: &{Name:ha-522123-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:48:38.788814 1483623 status.go:174] checking status of ha-522123-m04 ...
	I1222 00:48:38.789151 1483623 cli_runner.go:164] Run: docker container inspect ha-522123-m04 --format={{.State.Status}}
	I1222 00:48:38.815008 1483623 status.go:371] ha-522123-m04 host status = "Running" (err=<nil>)
	I1222 00:48:38.815033 1483623 host.go:66] Checking if "ha-522123-m04" exists ...
	I1222 00:48:38.815367 1483623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-522123-m04
	I1222 00:48:38.838371 1483623 host.go:66] Checking if "ha-522123-m04" exists ...
	I1222 00:48:38.838680 1483623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 00:48:38.838730 1483623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-522123-m04
	I1222 00:48:38.858665 1483623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38410 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/ha-522123-m04/id_rsa Username:docker}
	I1222 00:48:38.964306 1483623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 00:48:38.980864 1483623 status.go:176] ha-522123-m04 status: &{Name:ha-522123-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.35s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 node start m02 --alsologtostderr -v 5
E1222 00:48:40.604496 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 node start m02 --alsologtostderr -v 5: (11.774668089s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5: (1.457597495s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.35s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.480628461s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.08s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 stop --alsologtostderr -v 5: (37.701988861s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 start --wait true --alsologtostderr -v 5: (1m0.183460849s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.08s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.96s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 node delete m03 --alsologtostderr -v 5: (9.96771641s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.96s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

TestMultiControlPlane/serial/StopCluster (36.63s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 stop --alsologtostderr -v 5
E1222 00:50:56.756509 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:51:10.874460 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 stop --alsologtostderr -v 5: (36.512597697s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5: exit status 7 (120.531421ms)

-- stdout --
	ha-522123
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-522123-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-522123-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1222 00:51:21.055002 1498571 out.go:360] Setting OutFile to fd 1 ...
	I1222 00:51:21.055116 1498571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:51:21.055127 1498571 out.go:374] Setting ErrFile to fd 2...
	I1222 00:51:21.055133 1498571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 00:51:21.055387 1498571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 00:51:21.055570 1498571 out.go:368] Setting JSON to false
	I1222 00:51:21.055605 1498571 mustload.go:66] Loading cluster: ha-522123
	I1222 00:51:21.055713 1498571 notify.go:221] Checking for updates...
	I1222 00:51:21.056025 1498571 config.go:182] Loaded profile config "ha-522123": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 00:51:21.056050 1498571 status.go:174] checking status of ha-522123 ...
	I1222 00:51:21.056536 1498571 cli_runner.go:164] Run: docker container inspect ha-522123 --format={{.State.Status}}
	I1222 00:51:21.075504 1498571 status.go:371] ha-522123 host status = "Stopped" (err=<nil>)
	I1222 00:51:21.075534 1498571 status.go:384] host is not running, skipping remaining checks
	I1222 00:51:21.075540 1498571 status.go:176] ha-522123 status: &{Name:ha-522123 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:51:21.075576 1498571 status.go:174] checking status of ha-522123-m02 ...
	I1222 00:51:21.075875 1498571 cli_runner.go:164] Run: docker container inspect ha-522123-m02 --format={{.State.Status}}
	I1222 00:51:21.110686 1498571 status.go:371] ha-522123-m02 host status = "Stopped" (err=<nil>)
	I1222 00:51:21.110710 1498571 status.go:384] host is not running, skipping remaining checks
	I1222 00:51:21.110718 1498571 status.go:176] ha-522123-m02 status: &{Name:ha-522123-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 00:51:21.110736 1498571 status.go:174] checking status of ha-522123-m04 ...
	I1222 00:51:21.111040 1498571 cli_runner.go:164] Run: docker container inspect ha-522123-m04 --format={{.State.Status}}
	I1222 00:51:21.128815 1498571 status.go:371] ha-522123-m04 host status = "Stopped" (err=<nil>)
	I1222 00:51:21.128838 1498571 status.go:384] host is not running, skipping remaining checks
	I1222 00:51:21.128846 1498571 status.go:176] ha-522123-m04 status: &{Name:ha-522123-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.63s)

TestMultiControlPlane/serial/RestartCluster (61.88s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1222 00:51:24.444812 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:51:29.153857 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m0.87710317s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.88s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (85.04s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 node add --control-plane --alsologtostderr -v 5
E1222 00:53:07.827265 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 node add --control-plane --alsologtostderr -v 5: (1m23.918615984s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-522123 status --alsologtostderr -v 5: (1.120578433s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.128971465s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

TestJSONOutput/start/Command (51.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-894662 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-894662 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (51.339490851s)
--- PASS: TestJSONOutput/start/Command (51.35s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-894662 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-894662 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-894662 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-894662 --output=json --user=testUser: (6.046266266s)
--- PASS: TestJSONOutput/stop/Command (6.05s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-400842 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-400842 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.845359ms)
-- stdout --
	{"specversion":"1.0","id":"8e3d66c4-52a3-4563-a4d2-e213931db1dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-400842] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e6b6fd7-a13c-42bb-8ebb-12934e449655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22179"}}
	{"specversion":"1.0","id":"7e987cda-34e1-4ef3-a5e6-7a0104fedc17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cac87bff-4123-4aa0-b82a-f82a69b73a15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig"}}
	{"specversion":"1.0","id":"b8d4ef4b-6c7c-416d-970e-3fcc8bbbda9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube"}}
	{"specversion":"1.0","id":"53239568-fc3d-43bc-88a0-d39f1f6cb689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d9463e12-7b4c-4e93-8bc4-7296aaa7c502","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f1eae7a6-39e9-4546-a3e3-9ed813025bb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-400842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-400842
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (35.19s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-931070 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-931070 --network=: (32.875726145s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-931070" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-931070
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-931070: (2.276528676s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.19s)

TestKicCustomNetwork/use_default_bridge_network (36.62s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-849068 --network=bridge
E1222 00:55:56.762221 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-849068 --network=bridge: (34.50900906s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-849068" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-849068
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-849068: (2.078762906s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.62s)

TestKicExistingNetwork (35.26s)

=== RUN   TestKicExistingNetwork
I1222 00:56:14.106550 1396864 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1222 00:56:14.122624 1396864 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1222 00:56:14.122700 1396864 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1222 00:56:14.122718 1396864 cli_runner.go:164] Run: docker network inspect existing-network
W1222 00:56:14.140138 1396864 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1222 00:56:14.140170 1396864 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1222 00:56:14.140184 1396864 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1222 00:56:14.140302 1396864 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 00:56:14.159509 1396864 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4235b17e4b35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:6e:e2:e5:a2:3e} reservation:<nil>}
I1222 00:56:14.159893 1396864 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000411680}
I1222 00:56:14.159920 1396864 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1222 00:56:14.159975 1396864 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1222 00:56:14.220998 1396864 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-025542 --network=existing-network
E1222 00:56:29.153509 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-025542 --network=existing-network: (32.879122967s)
helpers_test.go:176: Cleaning up "existing-network-025542" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-025542
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-025542: (2.237375617s)
I1222 00:56:49.353829 1396864 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.26s)

TestKicCustomSubnet (38.32s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-988452 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-988452 --subnet=192.168.60.0/24: (36.046405008s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-988452 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-988452" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-988452
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-988452: (2.245252867s)
--- PASS: TestKicCustomSubnet (38.32s)

TestKicStaticIP (36.75s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-954645 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-954645 --static-ip=192.168.200.200: (34.298934631s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-954645 ip
helpers_test.go:176: Cleaning up "static-ip-954645" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-954645
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-954645: (2.26882759s)
--- PASS: TestKicStaticIP (36.75s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-811827 --driver=docker  --container-runtime=containerd
E1222 00:58:07.826549 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-811827 --driver=docker  --container-runtime=containerd: (32.837281815s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-814338 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-814338 --driver=docker  --container-runtime=containerd: (32.672465734s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-811827
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-814338
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-814338" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-814338
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-814338: (2.177112919s)
helpers_test.go:176: Cleaning up "first-811827" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-811827
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-811827: (2.354780976s)
--- PASS: TestMinikubeProfile (71.49s)

TestMountStart/serial/StartWithMountFirst (8.64s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-962812 --memory=3072 --mount-string /tmp/TestMountStartserial2651761555/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-962812 --memory=3072 --mount-string /tmp/TestMountStartserial2651761555/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.641017411s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.64s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-962812 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.86s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-964642 --memory=3072 --mount-string /tmp/TestMountStartserial2651761555/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-964642 --memory=3072 --mount-string /tmp/TestMountStartserial2651761555/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.863544803s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.86s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-964642 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-962812 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-962812 --alsologtostderr -v=5: (1.693409036s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-964642 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-964642
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-964642: (1.300052078s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.47s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-964642
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-964642: (6.474018224s)
--- PASS: TestMountStart/serial/RestartStopped (7.47s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-964642 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (78.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-195089 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1222 01:00:56.757232 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-195089 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.312316446s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.84s)

TestMultiNode/serial/DeployApp2Nodes (6.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-195089 -- rollout status deployment/busybox: (4.164514263s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-n6khg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-q85zt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-n6khg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-q85zt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-n6khg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-q85zt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.04s)

TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-n6khg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-n6khg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-q85zt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-195089 -- exec busybox-7b57f96db7-q85zt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

TestMultiNode/serial/AddNode (29.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-195089 -v=5 --alsologtostderr
E1222 01:01:12.210632 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:01:29.154477 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-195089 -v=5 --alsologtostderr: (28.686383095s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.37s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-195089 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.52s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp testdata/cp-test.txt multinode-195089:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile861151036/001/cp-test_multinode-195089.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089:/home/docker/cp-test.txt multinode-195089-m02:/home/docker/cp-test_multinode-195089_multinode-195089-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m02 "sudo cat /home/docker/cp-test_multinode-195089_multinode-195089-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089:/home/docker/cp-test.txt multinode-195089-m03:/home/docker/cp-test_multinode-195089_multinode-195089-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m03 "sudo cat /home/docker/cp-test_multinode-195089_multinode-195089-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp testdata/cp-test.txt multinode-195089-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile861151036/001/cp-test_multinode-195089-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089-m02:/home/docker/cp-test.txt multinode-195089:/home/docker/cp-test_multinode-195089-m02_multinode-195089.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089 "sudo cat /home/docker/cp-test_multinode-195089-m02_multinode-195089.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089-m02:/home/docker/cp-test.txt multinode-195089-m03:/home/docker/cp-test_multinode-195089-m02_multinode-195089-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m03 "sudo cat /home/docker/cp-test_multinode-195089-m02_multinode-195089-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp testdata/cp-test.txt multinode-195089-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile861151036/001/cp-test_multinode-195089-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089-m03:/home/docker/cp-test.txt multinode-195089:/home/docker/cp-test_multinode-195089-m03_multinode-195089.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089 "sudo cat /home/docker/cp-test_multinode-195089-m03_multinode-195089.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 cp multinode-195089-m03:/home/docker/cp-test.txt multinode-195089-m02:/home/docker/cp-test_multinode-195089-m03_multinode-195089-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 ssh -n multinode-195089-m02 "sudo cat /home/docker/cp-test_multinode-195089-m03_multinode-195089-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.52s)

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-195089 node stop m03: (1.307178322s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-195089 status: exit status 7 (560.423206ms)

-- stdout --
	multinode-195089
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-195089-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-195089-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr: exit status 7 (552.107549ms)

-- stdout --
	multinode-195089
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-195089-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-195089-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1222 01:01:52.480279 1552232 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:01:52.480768 1552232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:01:52.480795 1552232 out.go:374] Setting ErrFile to fd 2...
	I1222 01:01:52.480801 1552232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:01:52.481213 1552232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:01:52.481458 1552232 out.go:368] Setting JSON to false
	I1222 01:01:52.481489 1552232 mustload.go:66] Loading cluster: multinode-195089
	I1222 01:01:52.481545 1552232 notify.go:221] Checking for updates...
	I1222 01:01:52.482133 1552232 config.go:182] Loaded profile config "multinode-195089": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:01:52.482610 1552232 status.go:174] checking status of multinode-195089 ...
	I1222 01:01:52.483591 1552232 cli_runner.go:164] Run: docker container inspect multinode-195089 --format={{.State.Status}}
	I1222 01:01:52.502787 1552232 status.go:371] multinode-195089 host status = "Running" (err=<nil>)
	I1222 01:01:52.502817 1552232 host.go:66] Checking if "multinode-195089" exists ...
	I1222 01:01:52.503125 1552232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-195089
	I1222 01:01:52.533657 1552232 host.go:66] Checking if "multinode-195089" exists ...
	I1222 01:01:52.533984 1552232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:01:52.534029 1552232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-195089
	I1222 01:01:52.552622 1552232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38515 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/multinode-195089/id_rsa Username:docker}
	I1222 01:01:52.647967 1552232 ssh_runner.go:195] Run: systemctl --version
	I1222 01:01:52.654478 1552232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:01:52.668286 1552232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:01:52.741869 1552232 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-22 01:01:52.732175625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:01:52.742513 1552232 kubeconfig.go:125] found "multinode-195089" server: "https://192.168.67.2:8443"
	I1222 01:01:52.742554 1552232 api_server.go:166] Checking apiserver status ...
	I1222 01:01:52.742601 1552232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 01:01:52.754977 1552232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1405/cgroup
	I1222 01:01:52.763941 1552232 api_server.go:182] apiserver freezer: "6:freezer:/docker/d5d7912f28eb1c0d675cce774b2ea1157859f0c9883d07d688f4d2a85f789167/kubepods/burstable/podacc13da2e5fa01e94d0ff7d75cf96842/e96b9b93f2f437ad5f40a5cd204ed999a37cc15f45e655f83c9e47e5ef6778d9"
	I1222 01:01:52.764017 1552232 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d5d7912f28eb1c0d675cce774b2ea1157859f0c9883d07d688f4d2a85f789167/kubepods/burstable/podacc13da2e5fa01e94d0ff7d75cf96842/e96b9b93f2f437ad5f40a5cd204ed999a37cc15f45e655f83c9e47e5ef6778d9/freezer.state
	I1222 01:01:52.771968 1552232 api_server.go:204] freezer state: "THAWED"
	I1222 01:01:52.771997 1552232 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1222 01:01:52.780235 1552232 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1222 01:01:52.780278 1552232 status.go:463] multinode-195089 apiserver status = Running (err=<nil>)
	I1222 01:01:52.780291 1552232 status.go:176] multinode-195089 status: &{Name:multinode-195089 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 01:01:52.780310 1552232 status.go:174] checking status of multinode-195089-m02 ...
	I1222 01:01:52.780642 1552232 cli_runner.go:164] Run: docker container inspect multinode-195089-m02 --format={{.State.Status}}
	I1222 01:01:52.797874 1552232 status.go:371] multinode-195089-m02 host status = "Running" (err=<nil>)
	I1222 01:01:52.797898 1552232 host.go:66] Checking if "multinode-195089-m02" exists ...
	I1222 01:01:52.798246 1552232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-195089-m02
	I1222 01:01:52.815938 1552232 host.go:66] Checking if "multinode-195089-m02" exists ...
	I1222 01:01:52.816248 1552232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 01:01:52.816305 1552232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-195089-m02
	I1222 01:01:52.836172 1552232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38520 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/multinode-195089-m02/id_rsa Username:docker}
	I1222 01:01:52.931548 1552232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 01:01:52.944718 1552232 status.go:176] multinode-195089-m02 status: &{Name:multinode-195089-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1222 01:01:52.944765 1552232 status.go:174] checking status of multinode-195089-m03 ...
	I1222 01:01:52.945081 1552232 cli_runner.go:164] Run: docker container inspect multinode-195089-m03 --format={{.State.Status}}
	I1222 01:01:52.965231 1552232 status.go:371] multinode-195089-m03 host status = "Stopped" (err=<nil>)
	I1222 01:01:52.965253 1552232 status.go:384] host is not running, skipping remaining checks
	I1222 01:01:52.965260 1552232 status.go:176] multinode-195089-m03 status: &{Name:multinode-195089-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

TestMultiNode/serial/StartAfterStop (8.07s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-195089 node start m03 -v=5 --alsologtostderr: (7.002803616s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.07s)

TestMultiNode/serial/RestartKeepsNodes (77.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-195089
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-195089
E1222 01:02:19.805132 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-195089: (25.214622259s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-195089 --wait=true -v=5 --alsologtostderr
E1222 01:03:07.826227 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-195089 --wait=true -v=5 --alsologtostderr: (52.578144692s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-195089
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.92s)

TestMultiNode/serial/DeleteNode (5.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-195089 node delete m03: (5.011532542s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.71s)

TestMultiNode/serial/StopMultiNode (24.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-195089 stop: (23.953735009s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-195089 status: exit status 7 (105.449658ms)

-- stdout --
	multinode-195089
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-195089-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr: exit status 7 (91.572109ms)

-- stdout --
	multinode-195089
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-195089-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1222 01:03:48.776311 1561085 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:03:48.776538 1561085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:03:48.776571 1561085 out.go:374] Setting ErrFile to fd 2...
	I1222 01:03:48.776588 1561085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:03:48.776896 1561085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:03:48.777129 1561085 out.go:368] Setting JSON to false
	I1222 01:03:48.777192 1561085 mustload.go:66] Loading cluster: multinode-195089
	I1222 01:03:48.777239 1561085 notify.go:221] Checking for updates...
	I1222 01:03:48.777656 1561085 config.go:182] Loaded profile config "multinode-195089": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:03:48.777703 1561085 status.go:174] checking status of multinode-195089 ...
	I1222 01:03:48.778322 1561085 cli_runner.go:164] Run: docker container inspect multinode-195089 --format={{.State.Status}}
	I1222 01:03:48.799531 1561085 status.go:371] multinode-195089 host status = "Stopped" (err=<nil>)
	I1222 01:03:48.799559 1561085 status.go:384] host is not running, skipping remaining checks
	I1222 01:03:48.799567 1561085 status.go:176] multinode-195089 status: &{Name:multinode-195089 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 01:03:48.799598 1561085 status.go:174] checking status of multinode-195089-m02 ...
	I1222 01:03:48.799921 1561085 cli_runner.go:164] Run: docker container inspect multinode-195089-m02 --format={{.State.Status}}
	I1222 01:03:48.818590 1561085 status.go:371] multinode-195089-m02 host status = "Stopped" (err=<nil>)
	I1222 01:03:48.818610 1561085 status.go:384] host is not running, skipping remaining checks
	I1222 01:03:48.818617 1561085 status.go:176] multinode-195089-m02 status: &{Name:multinode-195089-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.15s)

TestMultiNode/serial/RestartMultiNode (49.28s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-195089 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-195089 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.560959613s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-195089 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.28s)

TestMultiNode/serial/ValidateNameConflict (36.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-195089
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-195089-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-195089-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.779225ms)

-- stdout --
	* [multinode-195089-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-195089-m02' is duplicated with machine name 'multinode-195089-m02' in profile 'multinode-195089'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-195089-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-195089-m03 --driver=docker  --container-runtime=containerd: (33.415391089s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-195089
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-195089: exit status 80 (364.023006ms)

-- stdout --
	* Adding node m03 to cluster multinode-195089 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-195089-m03 already exists in multinode-195089-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-195089-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-195089-m03: (2.124122815s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.05s)

TestScheduledStopUnix (109.9s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-088967 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-088967 --memory=3072 --driver=docker  --container-runtime=containerd: (33.286828951s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-088967 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1222 01:05:51.905802 1570669 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:05:51.905984 1570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:05:51.906011 1570669 out.go:374] Setting ErrFile to fd 2...
	I1222 01:05:51.906032 1570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:05:51.906489 1570669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:05:51.906857 1570669 out.go:368] Setting JSON to false
	I1222 01:05:51.907023 1570669 mustload.go:66] Loading cluster: scheduled-stop-088967
	I1222 01:05:51.907677 1570669 config.go:182] Loaded profile config "scheduled-stop-088967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:05:51.907819 1570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/scheduled-stop-088967/config.json ...
	I1222 01:05:51.908658 1570669 mustload.go:66] Loading cluster: scheduled-stop-088967
	I1222 01:05:51.908871 1570669 config.go:182] Loaded profile config "scheduled-stop-088967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-088967 -n scheduled-stop-088967
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-088967 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1222 01:05:52.391570 1570756 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:05:52.391872 1570756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:05:52.391888 1570756 out.go:374] Setting ErrFile to fd 2...
	I1222 01:05:52.391902 1570756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:05:52.392253 1570756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:05:52.392641 1570756 out.go:368] Setting JSON to false
	I1222 01:05:52.392930 1570756 daemonize_unix.go:73] killing process 1570685 as it is an old scheduled stop
	I1222 01:05:52.393095 1570756 mustload.go:66] Loading cluster: scheduled-stop-088967
	I1222 01:05:52.393611 1570756 config.go:182] Loaded profile config "scheduled-stop-088967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:05:52.393697 1570756 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/scheduled-stop-088967/config.json ...
	I1222 01:05:52.393949 1570756 mustload.go:66] Loading cluster: scheduled-stop-088967
	I1222 01:05:52.394190 1570756 config.go:182] Loaded profile config "scheduled-stop-088967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1222 01:05:52.402562 1396864 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/scheduled-stop-088967/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-088967 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1222 01:05:56.758250 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-088967 -n scheduled-stop-088967
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-088967
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-088967 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1222 01:06:18.337953 1571452 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:06:18.338113 1571452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:06:18.338129 1571452 out.go:374] Setting ErrFile to fd 2...
	I1222 01:06:18.338138 1571452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:06:18.338524 1571452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:06:18.339252 1571452 out.go:368] Setting JSON to false
	I1222 01:06:18.339417 1571452 mustload.go:66] Loading cluster: scheduled-stop-088967
	I1222 01:06:18.339889 1571452 config.go:182] Loaded profile config "scheduled-stop-088967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1222 01:06:18.340015 1571452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/scheduled-stop-088967/config.json ...
	I1222 01:06:18.340320 1571452 mustload.go:66] Loading cluster: scheduled-stop-088967
	I1222 01:06:18.340551 1571452 config.go:182] Loaded profile config "scheduled-stop-088967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3

** /stderr **
E1222 01:06:29.153926 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-088967
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-088967: exit status 7 (67.393236ms)

-- stdout --
	scheduled-stop-088967
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-088967 -n scheduled-stop-088967
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-088967 -n scheduled-stop-088967: exit status 7 (74.543475ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-088967" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-088967
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-088967: (4.973391089s)
--- PASS: TestScheduledStopUnix (109.90s)

TestInsufficientStorage (9.99s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-933292 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-933292 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.346913902s)

-- stdout --
	{"specversion":"1.0","id":"1cc35b62-197a-41eb-a838-9dbf2d46c746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-933292] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2bed630-6cf2-4dc2-b4da-8a2919535523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22179"}}
	{"specversion":"1.0","id":"cad26fac-78b9-4db1-950b-c97c9332687a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"606afcef-f921-42b3-a823-f58375ec39ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig"}}
	{"specversion":"1.0","id":"11dfa193-8797-4ae8-8ef6-488c0967c364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube"}}
	{"specversion":"1.0","id":"72f2ce53-a0f0-4d32-bf19-93b268674be1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"df5b720e-14cf-4da0-9185-4180fe29451b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"044d21e8-194b-4834-9959-1c051893383c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"873858e2-128b-4785-924a-b1318ee6417f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b1591ecf-faef-46f2-90d6-1dbbfd327cb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9453b85e-5d10-4b59-b7f9-3aab9502eda9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"337c96c4-ffd2-42a5-b0fc-2a723d02be77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-933292\" primary control-plane node in \"insufficient-storage-933292\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb48b925-5b4b-478f-a53d-f622ea7cb583","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766219634-22260 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"45325144-202d-441f-ba51-53c7e2dbeaaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f844f36-68f5-46a2-a0de-f4b856acc452","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-933292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-933292 --output=json --layout=cluster: exit status 7 (306.91795ms)

-- stdout --
	{"Name":"insufficient-storage-933292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-933292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1222 01:07:16.100949 1573295 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-933292" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-933292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-933292 --output=json --layout=cluster: exit status 7 (310.978854ms)

-- stdout --
	{"Name":"insufficient-storage-933292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-933292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1222 01:07:16.412132 1573361 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-933292" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
	E1222 01:07:16.423683 1573361 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/insufficient-storage-933292/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-933292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-933292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-933292: (2.027519041s)
--- PASS: TestInsufficientStorage (9.99s)

TestRunningBinaryUpgrade (313.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.124109089 start -p running-upgrade-558808 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1222 01:10:56.756977 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.124109089 start -p running-upgrade-558808 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.255930285s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-558808 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1222 01:11:29.153450 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:13:07.826747 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-558808 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.585476893s)
helpers_test.go:176: Cleaning up "running-upgrade-558808" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-558808
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-558808: (2.152968567s)
--- PASS: TestRunningBinaryUpgrade (313.13s)

TestMissingContainerUpgrade (139.26s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2206326275 start -p missing-upgrade-312354 --memory=3072 --driver=docker  --container-runtime=containerd
E1222 01:07:50.875634 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:08:07.826479 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2206326275 start -p missing-upgrade-312354 --memory=3072 --driver=docker  --container-runtime=containerd: (1m1.616443915s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-312354
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-312354
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-312354 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-312354 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.898054953s)
helpers_test.go:176: Cleaning up "missing-upgrade-312354" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-312354
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-312354: (1.955432546s)
--- PASS: TestMissingContainerUpgrade (139.26s)

TestPause/serial/Start (62.2s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-712620 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-712620 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.199317943s)
--- PASS: TestPause/serial/Start (62.20s)

TestPause/serial/SecondStartNoReconfiguration (7.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-712620 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-712620 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.051839533s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.06s)

TestPause/serial/Pause (0.7s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-712620 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-712620 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-712620 --output=json --layout=cluster: exit status 2 (321.751216ms)

-- stdout --
	{"Name":"pause-712620","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-712620","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-712620 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.9s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-712620 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

TestPause/serial/DeletePaused (2.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-712620 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-712620 --alsologtostderr -v=5: (2.793722634s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

TestPause/serial/VerifyDeletedResources (0.19s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-712620
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-712620: exit status 1 (22.052519ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-712620: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)

TestStoppedBinaryUpgrade/Setup (1.07s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

TestStoppedBinaryUpgrade/Upgrade (54.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3660674827 start -p stopped-upgrade-758720 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3660674827 start -p stopped-upgrade-758720 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.843713082s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3660674827 -p stopped-upgrade-758720 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3660674827 -p stopped-upgrade-758720 stop: (1.27489951s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-758720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-758720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.102926081s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (54.22s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.58s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-758720
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-758720: (2.579917259s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.58s)

TestPreload/Start-NoPreload-PullImage (66.81s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-118542 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1222 01:15:56.756558 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 01:16:29.153491 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-118542 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (59.715612373s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-118542 image pull public.ecr.aws/docker/library/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-arm64 -p test-preload-118542 image pull public.ecr.aws/docker/library/busybox:latest: (1.003222677s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-118542
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-118542: (6.091654247s)
--- PASS: TestPreload/Start-NoPreload-PullImage (66.81s)

TestPreload/Restart-With-Preload-Check-User-Image (48.71s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:72: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-118542 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:72: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-118542 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (48.444698957s)
preload_test.go:77: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-118542 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (48.71s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-580182 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-580182 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (116.34238ms)

-- stdout --
	* [NoKubernetes-580182] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (32.74s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-580182 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1222 01:18:59.806255 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-580182 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.374602074s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-580182 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.74s)

TestNoKubernetes/serial/StartWithStopK8s (16.04s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-580182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-580182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (13.666178008s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-580182 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-580182 status -o json: exit status 2 (326.771068ms)

-- stdout --
	{"Name":"NoKubernetes-580182","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-580182
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-580182: (2.044828198s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.04s)

TestNoKubernetes/serial/Start (7.71s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-580182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-580182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.714627087s)
--- PASS: TestNoKubernetes/serial/Start (7.71s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-580182 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-580182 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.898406ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (1.07s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-580182
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-580182: (1.308107695s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (6.56s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-580182 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-580182 --driver=docker  --container-runtime=containerd: (6.564670529s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.56s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-580182 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-580182 "sudo systemctl is-active --quiet service kubelet": exit status 1 (299.238233ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestNetworkPlugins/group/false (3.78s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-892179 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-892179 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (180.440911ms)

-- stdout --
	* [false-892179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1222 01:19:39.450897 1632982 out.go:360] Setting OutFile to fd 1 ...
	I1222 01:19:39.451006 1632982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:19:39.451017 1632982 out.go:374] Setting ErrFile to fd 2...
	I1222 01:19:39.451023 1632982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 01:19:39.451290 1632982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
	I1222 01:19:39.451689 1632982 out.go:368] Setting JSON to false
	I1222 01:19:39.452570 1632982 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":115332,"bootTime":1766251047,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1222 01:19:39.452636 1632982 start.go:143] virtualization:  
	I1222 01:19:39.456427 1632982 out.go:179] * [false-892179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1222 01:19:39.459458 1632982 out.go:179]   - MINIKUBE_LOCATION=22179
	I1222 01:19:39.459576 1632982 notify.go:221] Checking for updates...
	I1222 01:19:39.465179 1632982 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 01:19:39.468078 1632982 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
	I1222 01:19:39.470978 1632982 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
	I1222 01:19:39.473889 1632982 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1222 01:19:39.476774 1632982 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 01:19:39.480227 1632982 config.go:182] Loaded profile config "kubernetes-upgrade-108800": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1222 01:19:39.480360 1632982 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 01:19:39.503776 1632982 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1222 01:19:39.503910 1632982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 01:19:39.565716 1632982 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-22 01:19:39.555361507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1222 01:19:39.565822 1632982 docker.go:319] overlay module found
	I1222 01:19:39.569007 1632982 out.go:179] * Using the docker driver based on user configuration
	I1222 01:19:39.571789 1632982 start.go:309] selected driver: docker
	I1222 01:19:39.571809 1632982 start.go:928] validating driver "docker" against <nil>
	I1222 01:19:39.571824 1632982 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 01:19:39.575316 1632982 out.go:203] 
	W1222 01:19:39.578234 1632982 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1222 01:19:39.581127 1632982 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-892179 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-892179

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-892179

>>> host: /etc/nsswitch.conf:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /etc/hosts:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /etc/resolv.conf:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-892179

>>> host: crictl pods:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: crictl containers:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> k8s: describe netcat deployment:
error: context "false-892179" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-892179" does not exist

>>> k8s: netcat logs:
error: context "false-892179" does not exist

>>> k8s: describe coredns deployment:
error: context "false-892179" does not exist

>>> k8s: describe coredns pods:
error: context "false-892179" does not exist

>>> k8s: coredns logs:
error: context "false-892179" does not exist

>>> k8s: describe api server pod(s):
error: context "false-892179" does not exist

>>> k8s: api server logs:
error: context "false-892179" does not exist

>>> host: /etc/cni:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: ip a s:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: ip r s:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: iptables-save:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: iptables table nat:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> k8s: describe kube-proxy daemon set:
error: context "false-892179" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-892179" does not exist

>>> k8s: kube-proxy logs:
error: context "false-892179" does not exist

>>> host: kubelet daemon status:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: kubelet daemon config:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> k8s: kubelet logs:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Dec 2025 01:09:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-108800
contexts:
- context:
    cluster: kubernetes-upgrade-108800
    user: kubernetes-upgrade-108800
  name: kubernetes-upgrade-108800
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-108800
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.crt
    client-key: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-892179

>>> host: docker daemon status:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: docker daemon config:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /etc/docker/daemon.json:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: docker system info:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: cri-docker daemon status:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: cri-docker daemon config:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: cri-dockerd version:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: containerd daemon status:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: containerd daemon config:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /etc/containerd/config.toml:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: containerd config dump:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: crio daemon status:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: crio daemon config:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: /etc/crio:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"

>>> host: crio config:
* Profile "false-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-892179"
----------------------- debugLogs end: false-892179 [took: 3.398020533s] --------------------------------
helpers_test.go:176: Cleaning up "false-892179" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-892179
--- PASS: TestNetworkPlugins/group/false (3.78s)

TestStartStop/group/old-k8s-version/serial/FirstStart (70.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-433815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1222 01:23:07.826231 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-433815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m10.801965675s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (70.80s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (57.209834475s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.21s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433815 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [12a4cfff-e0c4-46f0-bdf1-e8ea93df0eb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [12a4cfff-e0c4-46f0-bdf1-e8ea93df0eb3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00496328s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433815 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-433815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-433815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.294144026s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-433815 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/old-k8s-version/serial/Stop (12.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-433815 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-433815 --alsologtostderr -v=3: (12.326713062s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.33s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-778490 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f1436a94-e0e6-4546-859e-c51a0c99d04b] Pending
helpers_test.go:353: "busybox" [f1436a94-e0e6-4546-859e-c51a0c99d04b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f1436a94-e0e6-4546-859e-c51a0c99d04b] Running
E1222 01:24:30.876275 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003823011s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-778490 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-778490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-778490 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-778490 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-778490 --alsologtostderr -v=3: (12.35515151s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-433815 -n old-k8s-version-433815
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-433815 -n old-k8s-version-433815: exit status 7 (109.886927ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-433815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
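The EnableAddonAfterStop checks above accept a non-zero `minikube status` exit while the host is stopped, logging it as "may be ok". A minimal sketch of that interpretation as it appears in this log (the mapping below is inferred from these log lines, not taken from minikube's documented exit-code table):

```shell
# Sketch: how the test tolerates non-zero `minikube status` exits, as seen
# in this log. Exit 0 = running; 7 (stopped host) and 2 (paused components)
# are recorded as "status error ... (may be ok)"; anything else would fail.
interpret_status_exit() {
  case "$1" in
    0) echo "ok" ;;
    2|7) echo "status error: exit status $1 (may be ok)" ;;
    *) echo "unexpected exit status $1" ;;
  esac
}
interpret_status_exit 7
```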
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-433815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-433815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.817542191s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-433815 -n old-k8s-version-433815
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490: exit status 7 (148.162233ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-778490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-778490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (52.580571804s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nnxdg" [686559ce-1d84-44db-acf9-a6eacd904269] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004181995s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nnxdg" [686559ce-1d84-44db-acf9-a6eacd904269] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004189124s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-433815 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-433815 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
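The VerifyKubernetesImages step lists the cluster's images and reports everything it does not consider a minikube image. A hypothetical sketch of that report, with a hard-coded sample list since no cluster is assumed here; the `registry.k8s.io` prefix filter is an illustrative assumption, not the test's actual classification logic (a real run would read `minikube image list --format=json`):

```shell
# Hypothetical sketch: flag images not hosted under registry.k8s.io, in the
# same "Found non-minikube image: ..." shape this log uses. Sample data only.
images='registry.k8s.io/kube-apiserver:v1.28.0
registry.k8s.io/echoserver:1.4
gcr.io/k8s-minikube/busybox:1.28.4-glibc
kindest/kindnetd:v20230511-dc714da8'
printf '%s\n' "$images" | grep -v '^registry.k8s.io/' | sed 's/^/Found non-minikube image: /'
```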
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-433815 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-433815 --alsologtostderr -v=1: (1.073069494s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-433815 -n old-k8s-version-433815
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-433815 -n old-k8s-version-433815: exit status 2 (398.576297ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-433815 -n old-k8s-version-433815
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-433815 -n old-k8s-version-433815: exit status 2 (343.739341ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-433815 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-433815 -n old-k8s-version-433815
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-433815 -n old-k8s-version-433815
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.62s)
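The Pause sequence above is: pause, then confirm the API server reports "Paused" and the kubelet "Stopped" (each via a `status` call that exits 2), then unpause and confirm both status calls succeed. A minimal sketch of the post-pause state check, inferred from this log's output:

```shell
# Sketch of the post-pause component check, as inferred from this log:
# after `minikube pause`, APIServer should read "Paused" and Kubelet
# "Stopped"; any other combination would be unexpected.
paused_state_ok() {  # $1 = component, $2 = reported state
  case "$1:$2" in
    apiserver:Paused) echo yes ;;
    kubelet:Stopped)  echo yes ;;
    *)                echo no ;;
  esac
}
paused_state_ok apiserver Paused
paused_state_ok kubelet Stopped
```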
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-d4szh" [839a8533-8096-4794-b56f-553ebb387b89] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00528041s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (1m9.66097261s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.66s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-d4szh" [839a8533-8096-4794-b56f-553ebb387b89] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004447266s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-778490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-778490 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-778490 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-778490 --alsologtostderr -v=1: (1.299375992s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490: exit status 2 (392.856277ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490: exit status 2 (347.070434ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-778490 --alsologtostderr -v=1
E1222 01:25:56.756757 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-778490 -n default-k8s-diff-port-778490
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.63s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-980842 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4549fcb8-15b6-464e-934d-e028efa6e999] Pending
helpers_test.go:353: "busybox" [4549fcb8-15b6-464e-934d-e028efa6e999] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4549fcb8-15b6-464e-934d-e028efa6e999] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003329017s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-980842 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-980842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016720124s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-980842 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-980842 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-980842 --alsologtostderr -v=3: (12.179360529s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-980842 -n embed-certs-980842
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-980842 -n embed-certs-980842: exit status 7 (74.952306ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-980842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1222 01:28:07.825933 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-980842 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (52.550702723s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-980842 -n embed-certs-980842
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.93s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-665sb" [fa8ff55a-15bd-4c85-853f-9b55910c0b57] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020518156s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-665sb" [fa8ff55a-15bd-4c85-853f-9b55910c0b57] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003520296s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-980842 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-980842 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-980842 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-980842 -n embed-certs-980842
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-980842 -n embed-certs-980842: exit status 2 (346.485375ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-980842 -n embed-certs-980842
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-980842 -n embed-certs-980842: exit status 2 (355.154609ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-980842 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-980842 -n embed-certs-980842
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-980842 -n embed-certs-980842
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.09s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-154186 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-154186 --alsologtostderr -v=3: (1.316692953s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.32s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-154186 -n no-preload-154186: exit status 7 (86.576328ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-154186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-869293 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-869293 --alsologtostderr -v=3: (1.30490225s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-869293 -n newest-cni-869293: exit status 7 (69.270277ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-869293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-869293 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestPreload/PreloadSrc/gcs (6.93s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-904116 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-904116 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (6.736175332s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-904116" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-904116
--- PASS: TestPreload/PreloadSrc/gcs (6.93s)

TestPreload/PreloadSrc/github (7.67s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-717709 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-717709 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (7.439189972s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-717709" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-717709
--- PASS: TestPreload/PreloadSrc/github (7.67s)

TestPreload/PreloadSrc/gcs-cached (0.9s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-768046 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-768046" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-768046
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.90s)

TestNetworkPlugins/group/auto/Start (46.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (46.59450248s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.59s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-892179 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-892179 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-72rch" [8e076e27-2103-4842-b1a0-2f9ce15f9887] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-72rch" [8e076e27-2103-4842-b1a0-2f9ce15f9887] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003773095s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-892179 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (54.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.105283752s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-4fksf" [def05d11-b66e-4bd8-a44f-660dc194dc63] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00364693s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-892179 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-892179 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5dw4c" [dabb45b5-d185-4cf0-812d-b615ebcd482d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5dw4c" [dabb45b5-d185-4cf0-812d-b615ebcd482d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003247414s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-892179 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (63.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.95357249s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.95s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-gx6d4" [b9f20435-abbf-4b22-8d09-e9c09394f15b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-gx6d4" [b9f20435-abbf-4b22-8d09-e9c09394f15b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004475383s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-892179 "pgrep -a kubelet"
I1222 01:49:11.841065 1396864 config.go:182] Loaded profile config "calico-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-892179 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-p5fqm" [b671c040-0447-423d-a4e9-88f69de6616f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-p5fqm" [b671c040-0447-423d-a4e9-88f69de6616f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004039671s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-892179 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (58.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.772316422s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.77s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-892179 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-892179 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6j7zj" [8a06e66c-f303-4fe4-b38f-28216d680072] Pending
helpers_test.go:353: "netcat-cd4db9dbf-6j7zj" [8a06e66c-f303-4fe4-b38f-28216d680072] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6j7zj" [8a06e66c-f303-4fe4-b38f-28216d680072] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003720644s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-892179 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/Start (53.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (53.569145781s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.57s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-6qtgg" [a80f3a6e-5199-418b-b9fb-985a6a1b5654] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004025649s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-892179 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-892179 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wnhbb" [21f2affe-906f-4424-8525-fc5e45070c76] Pending
helpers_test.go:353: "netcat-cd4db9dbf-wnhbb" [21f2affe-906f-4424-8525-fc5e45070c76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004032344s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.26s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-892179 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (43.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (43.862497236s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.86s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-892179 "pgrep -a kubelet"
I1222 01:53:31.094409 1396864 config.go:182] Loaded profile config "bridge-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-892179 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-p68j7" [f5767a29-e2ba-487b-a156-a6e860b7c2f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-p68j7" [f5767a29-e2ba-487b-a156-a6e860b7c2f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003835271s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-892179 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (79.01s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-892179 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.009524611s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-892179 "pgrep -a kubelet"
I1222 01:55:20.645487 1396864 config.go:182] Loaded profile config "enable-default-cni-892179": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-892179 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ldx6k" [9dfdb517-16d5-4852-967a-c7c6a8b69ca5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ldx6k" [9dfdb517-16d5-4852-967a-c7c6a8b69ca5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004052622s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-892179 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-892179 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    

Test skip (38/421)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0.46
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
377 TestStartStop/group/disable-driver-mounts 0.2
395 TestNetworkPlugins/group/kubenet 3.58
403 TestNetworkPlugins/group/cilium 4.1
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

TestDownloadOnly/v1.34.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.46s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-046842 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-046842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-046842
--- SKIP: TestDownloadOnlyKic (0.46s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-459348" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-459348
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

x
+
TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-892179 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-892179

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-892179" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-892179" does not exist

>>> k8s: netcat logs:
error: context "kubenet-892179" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-892179" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-892179" does not exist

>>> k8s: coredns logs:
error: context "kubenet-892179" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-892179" does not exist

>>> k8s: api server logs:
error: context "kubenet-892179" does not exist

>>> host: /etc/cni:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: ip a s:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: ip r s:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: iptables-save:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: iptables table nat:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-892179" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-892179" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-892179" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: kubelet daemon config:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> k8s: kubelet logs:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Dec 2025 01:09:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-108800
contexts:
- context:
    cluster: kubernetes-upgrade-108800
    user: kubernetes-upgrade-108800
  name: kubernetes-upgrade-108800
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-108800
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.crt
    client-key: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-892179

>>> host: docker daemon status:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: docker daemon config:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: docker system info:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: cri-docker daemon status:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: cri-docker daemon config:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: cri-dockerd version:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: containerd daemon status:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: containerd daemon config:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: containerd config dump:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: crio daemon status:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: crio daemon config:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: /etc/crio:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

>>> host: crio config:
* Profile "kubenet-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-892179"

----------------------- debugLogs end: kubenet-892179 [took: 3.40417843s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-892179" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-892179
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)

TestNetworkPlugins/group/cilium (4.1s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-892179 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-892179

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-892179

>>> host: /etc/nsswitch.conf:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: /etc/hosts:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: /etc/resolv.conf:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-892179

>>> host: crictl pods:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: crictl containers:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> k8s: describe netcat deployment:
error: context "cilium-892179" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-892179" does not exist

>>> k8s: netcat logs:
error: context "cilium-892179" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-892179" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-892179" does not exist

>>> k8s: coredns logs:
error: context "cilium-892179" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-892179" does not exist

>>> k8s: api server logs:
error: context "cilium-892179" does not exist

>>> host: /etc/cni:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: ip a s:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: ip r s:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: iptables-save:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: iptables table nat:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-892179

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-892179

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-892179" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-892179" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-892179

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-892179

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-892179" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-892179" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-892179" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-892179" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-892179" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: kubelet daemon config:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> k8s: kubelet logs:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Dec 2025 01:09:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-108800
contexts:
- context:
    cluster: kubernetes-upgrade-108800
    user: kubernetes-upgrade-108800
  name: kubernetes-upgrade-108800
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-108800
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.crt
    client-key: /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/kubernetes-upgrade-108800/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-892179

>>> host: docker daemon status:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: cri-dockerd version:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: containerd daemon status:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: containerd daemon config:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: containerd config dump:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: crio daemon status:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: crio daemon config:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: /etc/crio:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
>>> host: crio config:
* Profile "cilium-892179" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-892179"
----------------------- debugLogs end: cilium-892179 [took: 3.943515896s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-892179" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-892179
--- SKIP: TestNetworkPlugins/group/cilium (4.10s)